I'm having a problem where a UIImage that I build in a playground appears correctly when I inspect it in a UIImageView within the playground, but incorrectly when I save it to disk.
Here is the code I'm using to build/inspect/save the UIImage:
import UIKit
import XCPlayground
// Size of view and layers
let size = CGSize(width: 180, height: 180)
// Create a layer
let layer = CALayer()
layer.frame = CGRect(origin: CGPointZero, size: size)
layer.backgroundColor = UIColor.blackColor().CGColor
// And a sublayer
let sublayer = CALayer()
sublayer.frame = CGRect(origin: CGPointZero, size: size)
sublayer.cornerRadius = 180
sublayer.backgroundColor = UIColor.whiteColor().CGColor
layer.addSublayer(sublayer)
// Render the layer into an image
UIGraphicsBeginImageContext(size)
layer.renderInContext(UIGraphicsGetCurrentContext()!)
let im = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// Inspect the image within the playground
let view = UIImageView(image: im)
XCPShowView("Container View", view: view)
view.layer.addSublayer(layer)
// Save the image to disk
let data = NSData(data: UIImagePNGRepresentation(im)!)
let paths = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)
let docs = paths[0]
let fullPath = (docs as NSString).stringByAppendingPathComponent("icon.png")
let result = data.writeToFile(fullPath, atomically: true)
How can I render the image to disk to reflect what I see in the UIImageView?
The petal-shaped thing is what I expect to see if the radius is larger than what makes a circle. If you click the "Show result" icon in the playground next to the let view = UIImageView(image: im) line, it will show your image exactly the same way as it is stored on disk. See below. I changed the colors while experimenting with it, but otherwise it is your code...
So I think that what is shown in XCPShowView("Container View", view: view) is incorrect, not the other way around.
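For reference, if the goal was a solid circle, clamping the corner radius to half the side length makes the saved PNG and the preview agree. Here is a minimal sketch of the same render in current Swift (my own code, not from the question; the only behavioral change is the corner radius):
import UIKit
// A white circle on a black 180x180 square; the corner radius is clamped to
// half the side (90 rather than 180) so the rendered image is a circle.
let size = CGSize(width: 180, height: 180)
let layer = CALayer()
layer.frame = CGRect(origin: .zero, size: size)
layer.backgroundColor = UIColor.black.cgColor
let sublayer = CALayer()
sublayer.frame = CGRect(origin: .zero, size: size)
sublayer.cornerRadius = size.width / 2
sublayer.backgroundColor = UIColor.white.cgColor
layer.addSublayer(sublayer)
// Render the layer and get PNG bytes (the same data you would write to disk).
let renderer = UIGraphicsImageRenderer(size: size)
let image = renderer.image { ctx in
    layer.render(in: ctx.cgContext)
}
let pngData = image.pngData()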
Related
I'm trying to get the UIImage of the mask that I applied to a UIImageView.
I'm adding the mask using UIBezierPath and want the actual masked layer as UIImage, not the whole image. Think of it as a crop feature.
I'm cropping the image using:
func cropImage() {
    // Apply the bezier-path mask to the image view's layer
    shapeLayer.fillColor = UIColor.black.cgColor
    viewSource.imageView.layer.mask = shapeLayer
    viewSource.imageView.layer.masksToBounds = true

    // Render the masked layer into an image context sized to the image view
    UIGraphicsBeginImageContextWithOptions(viewSource.imageView.bounds.size, false, 1)
    viewSource.imageView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    self.completionObservable.onNext(newImage)
}
This eventually gives me the masked image on top of the old dimensions (the initial imageView width and height). But I want to have only the masked image, excluding the white background around it.
The screens are as shown:
I know what you mean now. Here is the answer: just update the size of the image context.
UIGraphicsBeginImageContextWithOptions((shapeLayer.path?.boundingBoxOfPath)!.size, false, 1)
If it's not that simple, you can try a CIImage pipeline to achieve it.
let context = CIContext()
let m1 = newImage?.cgImage
let m = CIImage(cgImage: m1!)
let bounds = imageView.layer.bounds
let cgImage = context.createCGImage(m, from: CGRect(x: 0, y: bounds.size.height, width: bounds.size.width, height: bounds.size.height))
let newUIImage = UIImage(cgImage: cgImage!)
You may need to adjust the transform.
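Another common approach (not the one above, just a sketch of my own) is to render the masked image view at full size and then crop the bitmap to the mask path's bounding box with CGImage's cropping(to:). This assumes the shapeLayer and viewSource.imageView names from the question:
// Render the masked layer, then crop the result to the mask path's bounding box.
func croppedMaskedImage() -> UIImage? {
    let imageView = viewSource.imageView
    shapeLayer.fillColor = UIColor.black.cgColor
    imageView.layer.mask = shapeLayer
    imageView.layer.masksToBounds = true

    // Draw the masked layer into an image the size of the image view.
    let renderer = UIGraphicsImageRenderer(bounds: imageView.bounds)
    let full = renderer.image { ctx in
        imageView.layer.render(in: ctx.cgContext)
    }

    // cropping(to:) works in pixels, so scale the rect by the image's scale factor.
    guard let box = shapeLayer.path?.boundingBoxOfPath else { return nil }
    let pixelBox = box.applying(CGAffineTransform(scaleX: full.scale, y: full.scale))
    guard let cropped = full.cgImage?.cropping(to: pixelBox) else { return nil }
    return UIImage(cgImage: cropped, scale: full.scale, orientation: .up)
}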
Is it possible to give a circular mask/crop to an image node without jagged edges?
Following this example from Apple (https://developer.apple.com/reference/spritekit/skcropnode), the result is not ideal. You can click on the link to see.
let shapeNode = SKShapeNode()
shapeNode.physicsBody = SKPhysicsBody(circleOfRadius: radius)
shapeNode.physicsBody?.allowsRotation = false
shapeNode.strokeColor = SKColor.clearColor()
// Add a crop node to mask the profile image
// profile images (start off with place holder)
let scale = 1.0
let profileImageNode = SKSpriteNode(imageNamed: "PlaceholderUser")
profileImageNode.setScale(CGFloat(scale))
let circlePath = CGPathCreateWithEllipseInRect(CGRectMake(-radius, -radius, radius*2, radius*2), nil)
let circleMaskNode = SKShapeNode()
circleMaskNode.path = circlePath
circleMaskNode.zPosition = 12
circleMaskNode.name = "connection_node"
circleMaskNode.fillColor = SKColor.whiteColor()
circleMaskNode.strokeColor = SKColor.clearColor()
let zoom = SKAction.fadeInWithDuration(0.25)
circleMaskNode.runAction(zoom)
let cropNode = SKCropNode()
cropNode.maskNode = circleMaskNode
cropNode.addChild(profileImageNode)
cropNode.position = shapeNode.position
shapeNode.addChild(cropNode)
self.addChild(shapeNode)
UPDATE:
Ok, so here's one solution I came up with. Not super ideal, but it works perfectly. Essentially, I size/scale and cut the image exactly the way it would go on the SKSpriteNode, so I don't have to use SKCropNode or some variation of SKShapeNode.
I used these UIImage extensions by Leo Dabus to resize/shape the image exactly as needed. Cut a UIImage into a circle Swift(iOS)
extension UIImage {
    var circle: UIImage? {
        let square = CGSize(width: min(size.width, size.height), height: min(size.width, size.height))
        let imageView = UIImageView(frame: CGRect(origin: CGPoint(x: 0, y: 0), size: square))
        imageView.contentMode = .ScaleAspectFill
        imageView.image = self
        imageView.layer.cornerRadius = square.width/2
        imageView.layer.masksToBounds = true
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.renderInContext(context)
        let result = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return result
    }

    func resizedImageWithinRect(rectSize: CGSize) -> UIImage {
        let widthFactor = size.width / rectSize.width
        let heightFactor = size.height / rectSize.height
        var resizeFactor = widthFactor
        if size.height > size.width {
            resizeFactor = heightFactor
        }
        let newSize = CGSizeMake(size.width/resizeFactor, size.height/resizeFactor)
        // resizedImage(_:) is a further helper from the linked answer (not shown here)
        let resized = resizedImage(newSize)
        return resized
    }
}
The final code looks like this:
//create/shape image
let image = UIImage(named: "TestImage")
let scaledImage = image?.resizedImageWithinRect(CGSize(width: 100, height: 100))
let circleImage = scaledImage?.circle
//create sprite
let sprite = SKSpriteNode(texture: SKTexture(image: circleImage!))
sprite.position = CGPoint(x: view.frame.width/2, y: view.frame.height/2)
//set texture/image
sprite.texture = SKTexture(image: circleImage!)
sprite.physicsBody = SKPhysicsBody(texture: SKTexture(image: circleImage!), size: CGSizeMake(100, 100))
if let physics = sprite.physicsBody {
    // add the physics properties here
}
//scale node
sprite.setScale(1.0)
addChild(sprite)
So if you have a perfectly scaled asset/image, then you probably don't need to do all this work, but I'm getting images from the backend that could come in any size.
There are two different techniques that can be combined to reduce the aliasing of edges created from cropping.
1. Create bigger images than you need, both the target (to be cropped) and the mask. Perform the cropping action, then scale the result down to the required size (a rough sketch of this follows below).
2. Use very subtle blurring of the cropping shape to soften its edges. This is best done in Photoshop or a similar editing program, to taste and need.
When these two techniques are combined, the results can be very good.
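As a rough illustration of the first technique in SpriteKit terms (my own sketch, with assumed names and sizes, not code from this answer): build the crop at twice the radius you need, then scale the cropped node down to 50%, so the cropped edge is smoothed when drawn at the smaller size:
import SpriteKit
// "PlaceholderUser" and the radii here are placeholders for illustration.
let bigRadius: CGFloat = 100              // final on-screen radius will be 50
let mask = SKShapeNode(circleOfRadius: bigRadius)
mask.fillColor = .white
mask.strokeColor = .clear
let cropNode = SKCropNode()
cropNode.maskNode = mask
cropNode.addChild(SKSpriteNode(imageNamed: "PlaceholderUser"))
cropNode.setScale(0.5)                    // downscale the oversized crop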
Let the stroke color be displayed. Also, you can make the line width a little thicker and the jagged edges will disappear.
circleMaskNode.strokeColor = SKColor.whiteColor()
All you have to do is change the SKShapeNode's lineWidth property to be twice the radius of the circle:
func circularCropNode(radius: CGFloat, add: SKNode) {
    let cropper = SKCropNode()
    cropper.addChild(add)
    addChild(cropper)

    // A circle path of radius/2 stroked with lineWidth == radius covers a
    // solid disc of the full radius, and the stroked edge renders smoothly.
    let circleMask = SKShapeNode(circleOfRadius: radius/2)
    circleMask.lineWidth = radius
    cropper.maskNode = circleMask
}
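A brief usage sketch (the radius and image name are placeholders of mine); note that circularCropNode calls addChild, so it needs to run inside your SKScene or another parent node:
// Crop a profile sprite to a smooth disc of radius 50.
let profileImageNode = SKSpriteNode(imageNamed: "PlaceholderUser")
circularCropNode(radius: 50, add: profileImageNode)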
I want to apply a CIFilter to a UI element. I tried to apply it to the view's layer via the filters property; however, the filter doesn't get applied.
Here's an approach: use UIGraphicsGetImageFromCurrentImageContext to generate a UIImage, apply the filter to that and overlay an image view containing the filtered image over your original component.
Here's a way to do that with a blur (taken from my blog):
Getting a blurred representation of a UIView is pretty simple: I need to begin an image context, use the view's layer's renderInContext method to render into the context and then get a UIImage from the context:
UIGraphicsBeginImageContextWithOptions(CGSize(width: frame.width, height: frame.height), false, 1)
layer.renderInContext(UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext();
Once I have the image populated, it's a fairly standard workflow to apply a Gaussian blur to it:
guard let blur = CIFilter(name: "CIGaussianBlur") else
{
return
}
blur.setValue(CIImage(image: image), forKey: kCIInputImageKey)
blur.setValue(blurRadius, forKey: kCIInputRadiusKey)
let ciContext = CIContext(options: nil)
let result = blur.valueForKey(kCIOutputImageKey) as! CIImage!
let boundingRect = CGRect(x: -blurRadius * 4,
                          y: -blurRadius * 4,
                          width: frame.width + (blurRadius * 8),
                          height: frame.height + (blurRadius * 8))
let cgImage = ciContext.createCGImage(result, fromRect: boundingRect)
let filteredImage = UIImage(CGImage: cgImage)
A blurred image will be larger than its input image, so I need to be explicit about the size I require in createCGImage.
The next step is to add a UIImageView to my view and hide all the other views. I've subclassed UIImageView to BlurOverlay so that when it comes to removing it, I can be sure I'm not removing an existing UIImageView:
let blurOverlay = BlurOverlay()
blurOverlay.frame = boundingRect
blurOverlay.image = filteredImage
subviews.forEach{ $0.hidden = true }
addSubview(blurOverlay)
When it comes to de-blurring, I want to ensure the last subview is one of my BlurOverlay instances, remove it, and unhide the existing views:
func unBlur()
{
    if let blurOverlay = subviews.last as? BlurOverlay
    {
        blurOverlay.removeFromSuperview()
        subviews.forEach{ $0.hidden = false }
    }
}
Finally, to see if a UIView is currently blurred, I just need to see if its last subview is a BlurOverlay:
var isBlurred: Bool
{
    return subviews.last is BlurOverlay
}
I have a view in the menu bar into which I'm putting some images. I want to make it 50% transparent. I've tried:
let image = NSImage(named: "bar.pdf")
let imageView = NSImageView(frame: NSRect(x: 0, y: 0, width: 3, height: 11))
imageView.wantsLayer = true
imageView.alphaValue = 0.5
However, the image is still completely opaque.
The following snippet appears to be working in the playground. Is it possible that you are actually setting the alphaValue of one view but showing another? Also, setting the alphaValue of the view does not in its own right require wantsLayer to be true. Finally, your code example is missing the line where you set the image of the view, though I cannot imagine how that could be relevant to your problem (the order in which you set these two properties does not matter)...
import XCPlayground
import Cocoa
let url = NSURL(string: "http://www.gravatar.com/avatar/cbfa20635c9269675d54547c080c9b64")!
let image = NSImage(contentsOfURL: url)!
let imageView = NSImageView(frame: CGRect(origin: .zero, size: image.size))
imageView.image = image
imageView.alphaValue = 0.5
XCPlaygroundPage.currentPage.liveView = imageView
Apple changed some of the names of things in newer versions of Swift.
Here's a Swift 3.0 version:
import PlaygroundSupport
import Cocoa

let url = URL(string: "http://www.gravatar.com/avatar/cbfa20635c9269675d54547c080c9b64")!
let image = NSImage(contentsOf: url)!
let imageView = NSImageView(frame: CGRect(origin: .zero, size: image.size))
imageView.image = image
imageView.alphaValue = 0.5
PlaygroundPage.current.liveView = imageView
There is a UIImageView inside a UIScrollView which the user can scale down and up and then save:
The area below the navigation bar and above the tab bar is the UIScrollView's frame.
When the user hits Done, the image is saved to the camera roll. This is what gets saved there (Photos app):
I have no idea where this empty space in the saved image comes from.
This is my code to save the image:
UIGraphicsBeginImageContextWithOptions(scrollView.frame.size, false, 0.0)
let rect = CGRectMake(0, scrollView.frame.origin.y, scrollView.frame.size.width, scrollView.frame.size.height)
self.view.drawViewHierarchyInRect(rect, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
let imageData = UIImageJPEGRepresentation(image, 1)
let compressedJPGImage = UIImage(data: imageData!)
UIImageWriteToSavedPhotosAlbum(compressedJPGImage!, nil, nil, nil)
What I want to save is exactly the visible region of UIScrollView.
I had to use CGContextTranslateCTM() to translate the rectangle:
let screenRect: CGRect = scrollView.bounds
UIGraphicsBeginImageContext(screenRect.size)
let ctx: CGContextRef = UIGraphicsGetCurrentContext()!
CGContextTranslateCTM(ctx,0,-scrollView.frame.origin.y)
UIColor.blackColor().set()
CGContextFillRect(ctx, screenRect)
view.layer.renderInContext(ctx)
let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
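For what it's worth, the same translate-then-render idea with the newer UIGraphicsImageRenderer API (iOS 10+) might look like this; a sketch assuming the same scrollView and view as above:
let renderer = UIGraphicsImageRenderer(size: scrollView.bounds.size)
let image = renderer.image { ctx in
    // Shift the drawing up so the scroll view's visible area lands at (0, 0).
    ctx.cgContext.translateBy(x: 0, y: -scrollView.frame.origin.y)
    view.layer.render(in: ctx.cgContext)
}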