How to crop UIImage to mask in Swift

I have a UIImage that I mask using another UIImage. The only problem is that the area outside the masked UIImage is still user-interactable. How do I completely crop a UIImage to another image instead of just masking it?
@IBOutlet weak var imageView: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    let imageMask = UIImageView()
    imageMask.image = // image to mask with
    imageMask.frame = imageView.bounds
    imageView.mask = imageMask
}

Criteria
A simple test case could set a background color on the ViewController's view and load the image and the mask. Then a UITapGestureRecognizer is added both to the ViewController's view and to the UIImageView.
With a background color on the ViewController's view, it is easy to see whether masking works.
If you then tap a non-transparent area, the tap should be received by the UIImageView; otherwise the ViewController's view should receive it.
Image and Mask Image Size
In most cases the image and the mask image have the same size, or at least the same aspect ratio.
It makes sense to give the masking UIImageView the same contentMode as the original UIImageView; otherwise the two would become misaligned as soon as the content mode is changed, e.g. in Interface Builder.
Test Case
Therefore the test case could look like this:
import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    private let maskView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        self.imageView.image = UIImage(named: "testimage")
        self.maskView.image = UIImage(named: "mask")
        self.imageView.mask = maskView

        // UIImageView has user interaction disabled by default
        // (unless it is already enabled in the storyboard)
        self.imageView.isUserInteractionEnabled = true

        let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(backgroundTapped))
        self.view.addGestureRecognizer(tapGestureRecognizer)

        let imageViewGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(imageViewTapped))
        self.imageView.addGestureRecognizer(imageViewGestureRecognizer)
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        self.maskView.contentMode = self.imageView.contentMode
        self.maskView.frame = self.imageView.bounds
    }

    @objc private func backgroundTapped() {
        print("background tapped!")
    }

    @objc private func imageViewTapped() {
        print("image view tapped!")
    }
}
This code runs as is. As expected, though, taps on the transparent area of the UIImageView also end up here.
CustomImageView
Therefore we need a CustomImageView that, when a transparent pixel is tapped, reports that it is not responsible for the touch.
This can be achieved by overriding this method:
func point(inside point: CGPoint,
with event: UIEvent?) -> Bool
see documentation here: https://developer.apple.com/documentation/uikit/uiview/1622533-point
Returns a Boolean value indicating whether the receiver contains the specified point.
There is already a good answer for this on SO, adapted only slightly here: https://stackoverflow.com/a/27923457
import UIKit

class CustomImageView: UIImageView {

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        // Only claim the touch if the pixel under it is (mostly) opaque
        return self.alphaFromPoint(point: point) > 32
    }

    private func alphaFromPoint(point: CGPoint) -> UInt8 {
        var pixel: [UInt8] = [0, 0, 0, 0]
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let alphaInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        // Render just the single pixel at `point` into a 1x1 bitmap context
        // and read back its alpha component
        if let context = CGContext(data: &pixel,
                                   width: 1,
                                   height: 1,
                                   bitsPerComponent: 8,
                                   bytesPerRow: 4,
                                   space: colorSpace,
                                   bitmapInfo: alphaInfo.rawValue) {
            context.translateBy(x: -point.x, y: -point.y)
            self.layer.render(in: context)
        }
        return pixel[3]
    }
}
Don't forget to change the custom class of the image view to CustomImageView in Xcode's Identity Inspector.
If you now tap on a transparent area, the ViewController's view in the background gets the tap. If you tap on a non-transparent area, our image view receives the tap.
Demo
Here is a short demo of the above code using the image and mask from the question:

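If you actually want a cropped UIImage rather than a live layer mask, one option is to bake the mask into the image by compositing with the .sourceIn blend mode and then display the result in the CustomImageView above. This is only a minimal sketch, assuming the image and the mask have the same size; the function name is made up:

import UIKit

/// Illustrative sketch: returns a copy of `image` whose pixels are transparent
/// wherever `maskImage` is transparent. Assumes both images have the same size.
func applyAlphaMask(_ maskImage: UIImage, to image: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: image.size)
        // Draw the mask first, then composite the image with .sourceIn
        // so it only survives where the mask is opaque.
        maskImage.draw(in: rect)
        image.draw(in: rect, blendMode: .sourceIn, alpha: 1)
    }
}

With the transparency baked into the image itself, the alpha test in CustomImageView works without setting a separate mask view.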
Related

How do I update a CALayer with a CVPixelBuffer/IOSurface?

I have an IOSurface-backed CVPixelBuffer that is getting updated from an outside source at 30fps. I want to render a preview of the image data in an NSView -- what's the best way for me to do that?
I can directly set the .contents of a CALayer on the view, but that only updates the first time my view updates (or if, say, I resize the view). I've been poring over the docs but I can't figure out the correct invocation of needsDisplay on the layer or view to let the view infrastructure know to refresh itself, especially when updates are coming from outside the view.
Ideally I'd just bind the IOSurface to my layer and any changes I make to it would be propagated, but I'm not sure if that's possible.
class VideoPreviewController: NSViewController, VideoFeedConsumer {
    let customLayer: CALayer = CALayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do view setup here.
        print("Loaded our video preview")

        view.layer?.addSublayer(customLayer)
        customLayer.frame = view.frame

        // register our view with the browser service
        VideoFeedBrowser.instance.registerConsumer(self)
    }

    override func viewWillDisappear() {
        // deregister our view from the video feed
        VideoFeedBrowser.instance.deregisterConsumer(self)
        super.viewWillDisappear()
    }

    // This callback gets called at 30fps whenever the pixelbuffer is updated
    @objc func updateFrame(pixelBuffer: CVPixelBuffer) {
        guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
            print("pixelbuffer isn't IOsurface backed! noooooo!")
            return
        }
        // Try and tell the view to redraw itself with new contents?
        // These methods don't work
        //self.view.setNeedsDisplay(self.view.visibleRect)
        //self.customLayer.setNeedsDisplay()
        self.customLayer.contents = surface
    }
}
Here's my attempt at a scaling version that is NSView-based rather than NSViewController-based; it also doesn't update correctly (or scale correctly, for that matter):
class VideoPreviewThumbnail: NSView, VideoFeedConsumer {
    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        self.wantsLayer = true
        // register our view with the browser service
        VideoFeedBrowser.instance.registerConsumer(self)
    }

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        self.wantsLayer = true
        // register our view with the browser service
        VideoFeedBrowser.instance.registerConsumer(self)
    }

    deinit {
        VideoFeedBrowser.instance.deregisterConsumer(self)
    }

    override func updateLayer() {
        // Do I need to put something here?
        print("update layer")
    }

    @objc
    func updateFrame(pixelBuffer: CVPixelBuffer) {
        guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
            print("pixelbuffer isn't IOsurface backed! noooooo!")
            return
        }
        self.layer?.contents = surface
        self.layer?.transform = CATransform3DMakeScale(
            self.frame.width / CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
            self.frame.height / CGFloat(CVPixelBufferGetHeight(pixelBuffer)),
            CGFloat(1))
    }
}
What am I missing?
Maybe I'm wrong, but I think you are updating your NSView on a background thread. (I suppose the callback to updateFrame happens on a background thread.)
If that's the case, when you want to update the NSView, convert your pixelBuffer to whatever you want (an NSImage?) and then dispatch the update to the main thread.
Pseudocode (I don't work with CVPixelBuffer often, so I'm not sure this is the right way to convert to an NSImage):
let ciImage = CIImage(cvImageBuffer: pixelBuffer)
let context = CIContext(options: nil)

let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)

// createCGImage(_:from:) returns an optional, so unwrap it before use
if let cgImage = context.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: width, height: height)) {
    let nsImage = NSImage(cgImage: cgImage, size: CGSize(width: width, height: height))
    DispatchQueue.main.async {
        // assign the NSImage to your NSView here
    }
}
Another catch: I did some tests, and it seems that you cannot assign an IOSurface directly to the contents of a CALayer.
I tried with this:
let textureImageWidth = 1024
let textureImageHeight = 1024

let macPixelFormatString = "ARGB"
var macPixelFormat: UInt32 = 0
for c in macPixelFormatString.utf8.reversed() {
    macPixelFormat *= 256
    macPixelFormat += UInt32(c)
}

let ioSurface = IOSurfaceCreate([kIOSurfaceWidth: textureImageWidth,
                                 kIOSurfaceHeight: textureImageHeight,
                                 kIOSurfaceBytesPerElement: 4,
                                 kIOSurfaceBytesPerRow: textureImageWidth * 4,
                                 kIOSurfaceAllocSize: textureImageWidth * textureImageHeight * 4,
                                 kIOSurfacePixelFormat: macPixelFormat] as CFDictionary)!

IOSurfaceLock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
let test = CIImage(ioSurface: ioSurface)
IOSurfaceUnlock(ioSurface, IOSurfaceLockOptions.readOnly, nil)

v1?.layer?.contents = ioSurface
where v1 is my view. No effect.
Even with a CIImage there is no effect (only the last few lines change):
IOSurfaceLock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
let test = CIImage(ioSurface: ioSurface)
IOSurfaceUnlock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
v1?.layer?.contents = test
If I create a CGImage, it works:
IOSurfaceLock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
let test = CIImage(ioSurface: ioSurface)
IOSurfaceUnlock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
let context = CIContext.init()
let img = context.createCGImage(test, from: test.extent)
v1?.layer?.contents = img
I encountered this problem myself, and the solution is to double-buffer the IOSurface source: use two IOSurface objects instead of one, render to the current surface, set that surface as the layer contents, and then on the next rendering pass use the alternate (back/front) surface, swapping each time.
It would appear that setting CALayer.contents twice to the same CVPixelBufferRef has no effect. However, if you alternate between two IOSurfaceRefs, it works wonderfully.
It may also be possible to invalidate the layer contents by setting it to nil and then resetting it. I did not try that; I am using the double-buffer technique.
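A minimal sketch of that double-buffered setup might look like the code below. All of the names here (DoubleBufferedPreview, present(drawing:)) are illustrative, not from the original answer:

import AppKit
import CoreVideo

/// Illustrative only: render into one IOSurface-backed pixel buffer while the
/// other is shown by the layer, then swap, so the layer gets a new contents
/// object on every frame.
final class DoubleBufferedPreview {
    let layer = CALayer()
    private var buffers: [CVPixelBuffer] = []
    private var backIndex = 0

    init?(width: Int, height: Int) {
        let attrs = [kCVPixelBufferIOSurfacePropertiesKey as String: [String: Any]()] as CFDictionary
        for _ in 0..<2 {
            var buffer: CVPixelBuffer?
            guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                      kCVPixelFormatType_32BGRA, attrs, &buffer) == kCVReturnSuccess,
                  let created = buffer else { return nil }
            buffers.append(created)
        }
    }

    /// Call once per frame with a closure that draws into the back buffer.
    func present(drawing draw: (CVPixelBuffer) -> Void) {
        let back = buffers[backIndex]
        draw(back)
        if let surface = CVPixelBufferGetIOSurface(back)?.takeUnretainedValue() {
            layer.contents = surface   // a different surface each frame, so the layer refreshes
        }
        backIndex = (backIndex + 1) % 2
    }
}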
If you have some IBActions that update it, create an observed variable with a didSet block and change its value whenever an IBAction is triggered; put the code you want to run on each update inside that block. I'd suggest making the variable an Int, defaulting it to 0 and adding 1 to it on every update.
For the part about showing the image data in an NSView, you can use an NSImageView (which is an NSView subclass), and that does the job.
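A tiny sketch of that observed-variable idea (the names here are illustrative, not from the original answer):

import AppKit

class PreviewController: NSViewController {   // hypothetical controller
    // Changing this value runs the didSet block, which is where the refresh code goes.
    var updateCounter: Int = 0 {
        didSet {
            // e.g. reassign the layer contents or an NSImageView's image here
        }
    }

    @IBAction func somethingChanged(_ sender: Any) {
        updateCounter += 1   // bump the counter so didSet fires
    }
}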
You need to convert the pixel buffer to a CGImage and assign it to the view's layer contents so that the main view's layer shows the new frame.
Please try this code:
@objc
func updateFrame(pixelBuffer: CVPixelBuffer) {
    guard CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() != nil else {
        print("pixelbuffer isn't IOsurface backed! noooooo!")
        return
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let baseAddr = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // Wrap the pixel buffer's memory in a bitmap context and snapshot it as a CGImage
    guard let cgContext = CGContext(data: baseAddr,
                                    width: width,
                                    height: height,
                                    bitsPerComponent: 8,
                                    bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                    space: colorSpace,
                                    bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue),
          let cgImage = cgContext.makeImage() else { return }

    self.layer?.contents = cgImage
}

How to give an imageView a shadow and rounded corners at the same time in Swift 3.0.1

I want to give an imageView a shadow and rounded corners at the same time, but I failed.
Here is my solution.
Basic idea:
Use an extra view (say AView) as the superview of the image view (or of whichever views should get the shadow) and assign the DGShadoView class to that view.
Pin the image view to AView (its superview) from the left, right, top and bottom with a constant of 5.
Set the background color of AView to clear color in the storyboard's Attributes Inspector; this is important.
Inside idea: we draw a Bezier path on AView just inside its border, give that path the rounded-corner and shadow properties, and place the target image view so that it lies within the path's bounds.
@IBDesignable
class DGShadoView: UIView {

    override func draw(_ rect: CGRect) {
        decorate(rect: rect)
    }

    func decorate(rect: CGRect) {
        // IMPORTANT: don't forget to set the background color of your view
        // to clear color in the storyboard's Attributes Inspector
        let ref = UIGraphicsGetCurrentContext()
        let contentRect = rect.insetBy(dx: 5, dy: 5)

        /* create the rounded path and fill it */
        let roundedPath = UIBezierPath(roundedRect: contentRect, cornerRadius: 5)
        ref!.setFillColor(UIColor.white.cgColor)       // your background color here
        ref!.setShadow(offset: CGSize(width: 0, height: 0), blur: 5,
                       color: UIColor.black.cgColor)   // your shadow color here
        roundedPath.fill()

        /* draw a subtle line at the top of the view */
        roundedPath.addClip()
        ref!.setStrokeColor(UIColor.red.cgColor)
        ref!.setBlendMode(CGBlendMode.overlay)
        ref!.move(to: CGPoint(x: contentRect.minX, y: contentRect.minY + 0.5))
        ref!.addLine(to: CGPoint(x: contentRect.maxX, y: contentRect.minY + 0.5))
        ref!.strokePath()
    }
}
Update
Subclass approach
There is another approach: make an empty Swift file, paste the following UIImageView subclass into it, and assign this subclass to the image view that should get the shadow.
import UIKit

class DGShadowView: UIImageView {

    @IBInspectable var intensity: Float = 0.2 {
        didSet {
            setShadow()
        }
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        setShadow()
    }

    func setShadow() {
        let shadowPath = UIBezierPath(rect: bounds)
        layer.masksToBounds = false
        layer.shadowColor = UIColor.black.cgColor
        layer.shadowOffset = CGSize(width: 0.0, height: 0.3)
        layer.shadowOpacity = intensity
        layer.shadowPath = shadowPath.cgPath
    }
}
The solution is to create two separate views: one for the shadow and one for the image itself. On the image view you set clipsToBounds (or masksToBounds on its layer) so that the corner radius is properly applied.
Put the image view on top of the shadow view and you've got your solution!
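As a rough sketch of that two-view idea (the class name and the concrete radius and shadow values below are just illustrative):

import UIKit

/// Illustrative container: the outer view draws the shadow, the inner image
/// view rounds and clips its corners.
final class ShadowedImageView: UIView {
    let imageView = UIImageView()

    override init(frame: CGRect) {
        super.init(frame: frame)
        // The container draws the shadow, so it must NOT clip to its bounds.
        layer.shadowColor = UIColor.black.cgColor
        layer.shadowOpacity = 0.3
        layer.shadowOffset = CGSize(width: 0, height: 2)
        layer.shadowRadius = 6

        // The image view rounds and clips its own corners.
        imageView.layer.cornerRadius = 12
        imageView.clipsToBounds = true
        imageView.frame = bounds
        imageView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        addSubview(imageView)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        // A shadow path matching the rounded rect keeps the shadow crisp and cheap to render.
        layer.shadowPath = UIBezierPath(roundedRect: bounds, cornerRadius: 12).cgPath
    }
}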

adding NSImageView to NSScrollView

I have an image with a dimension of about 200x2000 pixels. I want to display the image centered in a 200x200 rectangle but I want to be able to move it up and down. Best I can figure I need to add an NSImageView to an NSScrollView but I can't figure out how or even if this is the best way. This is my first day of OS X development...
After some googling I found this, from which I was able to come up with the following:
class MasterViewController: NSViewController {

    var Photo: NSImageView!
    @IBOutlet var scroll: NSScrollView!

    override func viewDidLoad() {
        super.viewDidLoad()

        var imageRect: NSRect

        self.Photo = NSImageView.init()
        self.Photo.image = NSImage.init(named: "horizon")
        imageRect = NSMakeRect(0.0, 0.0, self.Photo.image!.size.width, self.Photo.image!.size.height)
        print("image size", imageRect)

        self.Photo = NSImageView(frame: imageRect)
        self.Photo.setBoundsSize(NSSize(width: imageRect.width, height: imageRect.height))
        self.Photo.imageScaling = NSImageScaling.ScaleNone

        self.scroll.setFrameSize(NSSize(width: imageRect.width, height: imageRect.width))
        self.scroll.hasVerticalScroller = true
        self.scroll.hasHorizontalScroller = true

        self.Photo.setFrameSize(CGSize(width: imageRect.width, height: imageRect.width))
        self.scroll.documentView = self.Photo

        //print(self.scroll.documentView?.frame)
        //self.scroll.setC contentSize = NSSize(width: 200, height: 2000)
        //self.Photo.image = NSImage.init(named:"bezel")
        //self.scroll.addSubview(self.Photo)
    }
}
but I can't get the image to show up inside the scroll view.
@lusher00: in your example you initialize self.Photo twice, first with init() and then with (frame: imageRect); this probably explains why the image doesn't show up.
You could set the imageView as the documentView of the NSScrollView, as below:
@IBOutlet var scrollView: NSScrollView!

let imageView = NSImageView()   // or an existing image view
var imageRect: NSRect

// Initialize the imageView's image with appropriate content (adapt as needed)
imageView.image = NSImage.init(named: "horizon")
imageRect = NSMakeRect(0.0, 0.0, imageView.image!.size.width, imageView.image!.size.height)
imageView.setFrameSize(CGSize(width: imageRect.width, height: imageRect.height))
imageView.imageScaling = NSImageScaling.ScaleNone

scrollView.documentView = imageView
scrollView.hasVerticalScroller = true
scrollView.hasHorizontalScroller = true
Just add the image view as a subview of the scroll view:
scrollView.addSubview(imageView)
You can set the position of the image view, which positions it inside the scroll view.
The scroll view needs to know how large its content is in order to scroll over that area, so don't forget to update the scroll view's contentSize property.
For example, adding a 200x200 imageView to a scrollView with a 200x200 frame, setting the contentSize to 400x400 and calling imageView.center = scrollView.center will center the image and allow some scrolling around it within the scroll view's 200x200 visible frame.
You can also get the current offset of the scroll view by checking contentOffset.
If you need to track scrolling as the user scrolls, you can use scrollViewDidScroll. Check the scroll view docs for some other options.
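Note that contentSize, contentOffset and scrollViewDidScroll are UIScrollView (iOS) concepts. For an NSScrollView on macOS, a rough equivalent, sketched below as an assumption using current AppKit names rather than as a drop-in answer, is to size the documentView and observe the clip view's bounds:

import AppKit

// Hypothetical helper: embed an NSImageView in an NSScrollView and watch scrolling.
func embed(imageView: NSImageView, in scrollView: NSScrollView) {
    // The document view's frame plays the role of UIScrollView.contentSize.
    imageView.frame = NSRect(origin: .zero, size: imageView.image?.size ?? .zero)
    scrollView.documentView = imageView
    scrollView.hasVerticalScroller = true

    // Rough equivalent of contentOffset / scrollViewDidScroll:
    // observe the clip view's bounds changes.
    scrollView.contentView.postsBoundsChangedNotifications = true
    NotificationCenter.default.addObserver(forName: NSView.boundsDidChangeNotification,
                                           object: scrollView.contentView,
                                           queue: .main) { _ in
        print("scrolled to \(scrollView.contentView.bounds.origin)")
    }
}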

How do I make a UIScrollView scrollable only when touches are inside a custom shape?

I am working on an image collage app, and I am going to have multiple UIScrollViews. The scroll views will have boundaries with custom shapes, and the user will be able to dynamically change the corners of the shapes where they intersect. The scroll views have UIImageViews as subviews.
The scroll views are subviews of other UIViews. I applied a CAShapeLayer mask to each of these UIViews; that way I can mask the scroll views with no problem.
But the problem is that I can only scroll the contents of the last scroll view added. Also, I can pan and zoom beyond the boundaries of the masks; I should only be able to pan or zoom when I am touching inside the boundaries of the polygons that I use as masks.
I tried:
scrollView.clipsToBounds = true
scrollView.layer.masksToBounds = true
But the result is the same.
Unfortunately I'm not able to post screenshots, but here is the code that I use to create the masks for the UIViews:
func createMask(v: UIView, viewsToMask: [UIView], anchorPoint: CGPoint)
{
    let frame = v.bounds

    var shapeLayer = [CAShapeLayer]()
    var path = [CGMutablePathRef]()
    for i in 0...3 {
        path.append(CGPathCreateMutable())
        shapeLayer.append(CAShapeLayer())
    }

    // define frame constants
    let center = CGPointMake(frame.origin.x + frame.size.width / 2, frame.origin.y + frame.size.height / 2)
    let bottomLeft = CGPointMake(frame.origin.x, frame.origin.y + frame.size.height)
    let bottomRight = CGPointMake(frame.origin.x + frame.size.width, frame.origin.y + frame.size.height)

    switch frameType {
    case 1:
        // First view for Frame Type 1
        CGPathMoveToPoint(path[0], nil, 0, 0)
        CGPathAddLineToPoint(path[0], nil, bottomLeft.x, bottomLeft.y)
        CGPathAddLineToPoint(path[0], nil, anchorPoint.x, bottomLeft.y)
        CGPathAddLineToPoint(path[0], nil, anchorPoint.x, anchorPoint.y)
        CGPathCloseSubpath(path[0])

        // Second view for Frame Type 1
        CGPathMoveToPoint(path[1], nil, anchorPoint.x, anchorPoint.y)
        CGPathAddLineToPoint(path[1], nil, anchorPoint.x, bottomLeft.y)
        CGPathAddLineToPoint(path[1], nil, bottomRight.x, bottomRight.y)
        CGPathAddLineToPoint(path[1], nil, bottomRight.x, anchorPoint.y)
        CGPathCloseSubpath(path[1])

        // Third view for Frame Type 1
        CGPathMoveToPoint(path[2], nil, 0, 0)
        CGPathAddLineToPoint(path[2], nil, anchorPoint.x, anchorPoint.y)
        CGPathAddLineToPoint(path[2], nil, bottomRight.x, anchorPoint.y)
        CGPathAddLineToPoint(path[2], nil, bottomRight.x, 0)
        CGPathCloseSubpath(path[2])
    default:
        break
    }

    for (key, view) in enumerate(viewsToMask) {
        shapeLayer[key].path = path[key]
        view.layer.mask = shapeLayer[key]
    }
}
So, how can I make the scroll views behave in such a way that they will only scroll or zoom content when touches happen inside their corresponding mask boundaries?
EDIT:
According to the answer to this question, UIView's masked-off area still touchable?, the mask only modifies what you can see, not the area you can touch. So I subclassed UIScrollView and tried to override the hitTest:withEvent: method like so:
protocol CoolScrollViewDelegate: class {
    var scrollViewPaths: [CGMutablePathRef] { get set }
}

class CoolScrollView: UIScrollView
{
    weak var coolDelegate: CoolScrollViewDelegate?

    override func hitTest(point: CGPoint, withEvent event: UIEvent?) -> UIView?
    {
        if CGPathContainsPoint(coolDelegate?.scrollViewPaths[tag], nil, point, true) {
            return self
        } else {
            return nil
        }
    }
}
But with this implementation I can only check against the last scroll view, and the path boundaries change when I zoom in. For example, if I zoom in on the image, the hitTest:withEvent: method returns nil.
I would agree with @Kendel in the comments: to start with, it might be easier to create a UIScrollView subclass that knows how to mask itself with a particular shape. Keeping the shape logic in a scroll view subclass will keep things tidy and allow you to easily restrict touches to within the shape (I'll come to that in a minute).
It's a little hard to tell from your description exactly how your shaped views should behave, but as a brief example your ShapedScrollView might look something like this:
import UIKit

class ShapedScrollView: UIScrollView {

    // MARK: Types

    enum Shape {
        case First // Choose a better name!
    }

    // MARK: Properties

    private let shapeLayer = CAShapeLayer()

    var shape: Shape = .First {
        didSet { setNeedsLayout() }
    }

    // MARK: Initializers

    init(frame: CGRect, shape: Shape = .First) {
        self.shape = shape
        super.init(frame: frame)
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }

    // MARK: Layout

    override func layoutSubviews() {
        super.layoutSubviews()
        updateShape()
    }

    // MARK: Updating the Shape

    private func updateShape() {
        // Disable Core Animation actions to prevent changes to the shape layer animating implicitly
        CATransaction.begin()
        CATransaction.setDisableActions(true)

        if bounds.size != shapeLayer.bounds.size {
            // Bounds size has changed, completely update the shape
            shapeLayer.frame = CGRect(origin: contentOffset, size: bounds.size)
            shapeLayer.path = pathForShape(shape).CGPath
            layer.mask = shapeLayer
        } else {
            // Bounds size has NOT changed, just update origin of shape path to
            // match content offset - makes it appear stationary as we scroll
            var shapeFrame = shapeLayer.frame
            shapeFrame.origin = contentOffset
            shapeLayer.frame = shapeFrame
        }

        CATransaction.commit()
    }

    private func pathForShape(shape: Shape) -> UIBezierPath {
        let path = UIBezierPath()

        switch shape {
        case .First:
            // Build the shape path, whatever that might be...
            // path.moveToPoint(...)
            // ...
            break
        }

        return path
    }
}
So making the touches only work inside the specified shape is the easy part. We already have a reference to a shape layer that describes the shape we want to restrict touches to. UIView provides a helpful hit-testing method that lets you specify whether or not a particular point should be considered to be "inside" that view: pointInside(_:withEvent:). Simply add the following override to ShapedScrollView:
override func pointInside(point: CGPoint, withEvent event: UIEvent?) -> Bool {
    return CGPathContainsPoint(shapeLayer.path, nil, layer.convertPoint(point, toLayer: shapeLayer), false)
}
This just says: "If point (converted to the shape layer's coordinate system) is inside the shape's path, consider it to be inside the view; otherwise consider it outside the view."
If a scroll view that masks itself isn't appropriate, you can still adopt this technique by using a ShapedScrollContainerView: UIView with a scrollView property. Then, apply the shape mask to the container as above, and again use pointInside(_:withEvent:) to test whether it should respond to particular touch points.
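A rough sketch of that container variant, keeping the Swift 2-era syntax used above (the shapePath property and initializer details are illustrative, not from the original answer):

import UIKit

class ShapedScrollContainerView: UIView {

    let scrollView = UIScrollView()
    private let shapeLayer = CAShapeLayer()

    /// The shape, in the container's coordinate space, that should be visible and touchable.
    var shapePath: UIBezierPath = UIBezierPath() {
        didSet { setNeedsLayout() }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        commonInit()
    }

    private func commonInit() {
        scrollView.frame = bounds
        scrollView.autoresizingMask = [.FlexibleWidth, .FlexibleHeight]
        addSubview(scrollView)
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        // Mask the whole container (scroll view included) with the shape.
        shapeLayer.frame = bounds
        shapeLayer.path = shapePath.CGPath
        layer.mask = shapeLayer
    }

    override func pointInside(point: CGPoint, withEvent event: UIEvent?) -> Bool {
        // Only accept touches that land inside the shape, so scrolling and zooming
        // can't start from the masked-off area.
        return shapePath.containsPoint(point)
    }
}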

Swift: Mask Alignment + Auto-Layout Constraints

I have this PNG file, which I'd like to use as a mask for a UIView.
The view must be:
20 pixels/points in from each side
A perfect square
Centered vertically
I set the following constraints to accomplish this:
However, it seems these constraints don't play well with masks. When these constraints and the mask property are set, I get the following:
but I'd like the view to look just like the mask above, except orange (The backgroundColor here is just for simplicity—I later add subviews that need to be masked.)
However, when no constraints are set, the mask seems to work properly and I get something like this (borderColor added for visual purposes only):
Here's my code (viewForLayer is a UIView I made in the storyboard):
viewForLayer.layer.borderColor = UIColor.redColor().CGColor
viewForLayer.layer.borderWidth = 10
var mask = CALayer()
mask.contents = UIImage(named: "TopBump")!.CGImage
mask.frame = CGRect(x: 0, y: 0, width: viewForLayer.bounds.width, height: viewForLayer.bounds.height)
mask.position = CGPoint(x: viewForLayer.bounds.width/2, y: viewForLayer.bounds.height/2)
viewForLayer.layer.mask = mask
viewForLayer.backgroundColor = UIColor.orangeColor()
The problem, though, is that now the view isn't the right size or in the right position; it doesn't follow the rules listed above under "The view must be:". How can I have the mask work properly and the Auto Layout constraints set at the same time?
I found a way around it. Not sure if this is the best way but here we go...
http://imgur.com/pUIZbNA
Just make sure you also change the class of the UIView to MaskView in the storyboard's Identity Inspector. Apparently, the trick is to update the mask's frame on every layoutSubviews call.
class MaskView: UIView {
    override func layoutSubviews() {
        super.layoutSubviews()
        if let mask = self.layer.mask {
            // Keep the mask in sync with the view's current bounds
            mask.frame = self.bounds
        }
    }
}

class ViewController: UIViewController {

    @IBOutlet weak var viewForLayer: MaskView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let image = UIImage(named: "TopBump")!.CGImage!
        let maskLayer = CALayer()
        maskLayer.contents = image
        maskLayer.frame = viewForLayer.bounds
        viewForLayer.layer.mask = maskLayer
        viewForLayer.backgroundColor = UIColor.orangeColor()

        // Do any additional setup after loading the view, typically from a nib.
        viewForLayer.layer.borderColor = UIColor.redColor().CGColor
        viewForLayer.layer.borderWidth = 10
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
I tried it myself. Aside from the nitpick about 'let mask = CALayer()' (it's an immutable reference to a mutable object), changing the Auto Layout constraints of the embedded view shows that the mask is aligned correctly.
NSLog("\(viewForLayer.bounds.width), \(viewForLayer.bounds.height)")
returns 375.0, 667.0 on an iPhone 6 screen. What are you getting?