How to edit high resolution images with Core Graphics? - swift

I'm trying to draw a path on a high-resolution image. That by itself is nothing complicated for an iPhone, but if I add a shadow to my path, everything lags. It only lags when I work on images at a certain resolution (2000 x 3000, or even less).
The Storyboard views are:
-Scroll View
-Image View
-Draw View
I place the DrawView on top of the ImageView when I need to draw.
The ImageView and the DrawView (view.bounds.size) have the same resolution as the image (e.g. 2000 x 3000), and that's where the problem is.
I'm drawing on a view with a high resolution.
I'm not directly calling drawRect: but only calling setNeedsDisplay() inside touchesBegan() and touchesMoved() after doing some operations (path.moveToPoint, path.addCurveToPoint, array operations) and adding points to my array.
In drawRect: I essentially have:
override func drawRect(rect: CGRect) {
    print(self.bounds.size)
    UIColor.greenColor().setStroke()
    path.lineCapStyle = .Round
    path.lineJoinStyle = .Round
    path.lineWidth = 60.0
    context = UIGraphicsGetCurrentContext()!
    CGContextAddPath(context, path.CGPath)
    CGContextSetShadowWithColor(context, CGSizeZero, 14.0, UIColor.whiteColor().CGColor) // <-- with this shadow it lags a lot
    path.stroke()
}
My path is a UIBezierPath().
Any ideas to improve the speed?
Update:
I followed what @brimstone said. I now have the ImageView at a lower resolution, but I have to apply my drawn path to the high-resolution image.
(I'm trying to hand-crop an image with the path that the user draws.)
In this code I already have my closed path:
let layer = CAShapeLayer()
layer.path = path.CGPath
self.imageToEditView.layer.mask = layer
UIGraphicsBeginImageContext(self.imageEdited.size)
self.imageToEditView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let croppedCGImage = CGImageCreateWithImageInRect(image.CGImage!, CGPathGetBoundingBox(path.CGPath))
let croppedImage = UIImage(CGImage: croppedCGImage!)
self.imageToEditView.image = croppedImage
self.imageToEditView.layer.mask = nil
imageToEditView.bounds.size = low resolution
imageEdited.size = high resolution
I need to set the high resolution (I think) when I renderInContext. But how can I change the resolution of the imageView now?

Try downsizing the image for the user to draw over (it doesn't make a huge difference to the user experience on small iPhone screens), then apply the edits to the high-res image.
To downsize images, either use UIImagePNGRepresentation, which may make your image sufficiently smaller, or (if you're still having memory issues) try the techniques in this tutorial and this answer to make it even smaller.
Then, you can take the content of what they've drawn and apply it to the high-res image.
Alternatively, look at high-res optimisation techniques by Apple: https://developer.apple.com/library/ios/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/SupportingHiResScreensInViews/SupportingHiResScreensInViews.html
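For illustration, a minimal sketch of that workflow in current Swift syntax (the helper names, and the assumption that the drawing view and the displayed image share the same aspect ratio, are mine, not from the answer):

import UIKit

// Hypothetical helper: downscale the full-resolution image for display/drawing.
func downscaled(_ image: UIImage, to targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

// Hypothetical helper: map the user's path from the (small) drawing view back
// into the coordinate space of the full-resolution image before masking/cropping.
func pathInImageCoordinates(_ path: UIBezierPath,
                            viewSize: CGSize,
                            imageSize: CGSize) -> UIBezierPath {
    let scaled = path.copy() as! UIBezierPath
    scaled.apply(CGAffineTransform(scaleX: imageSize.width / viewSize.width,
                                   y: imageSize.height / viewSize.height))
    return scaled
}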

Related

How to adjust position of CAShapeLayer based upon device size?

I'm attempting to create a CAShapeLayer animation that draws an outline around the frame of a UILabel. Here's the code:
func newQuestionOutline() -> CAShapeLayer {
    let outlineShape = CAShapeLayer()
    outlineShape.isHidden = false
    let circularPath = UIBezierPath(roundedRect: questionLabel.frame, cornerRadius: 5)
    outlineShape.path = circularPath.cgPath
    outlineShape.fillColor = UIColor.clear.cgColor
    outlineShape.strokeColor = UIColor.yellow.cgColor
    outlineShape.lineWidth = 5
    outlineShape.strokeEnd = 0
    view.layer.addSublayer(outlineShape)
    return outlineShape
}

func newQuestionAnimation() {
    let outlineAnimation = CABasicAnimation(keyPath: "strokeEnd")
    outlineAnimation.toValue = 1
    outlineAnimation.duration = 5
    newQuestionOutline().add(outlineAnimation, forKey: "key")
}
The animation performs as expected when running in the simulator for an iPhone 11, which is the device size I used in the storyboard. However, when running the project on a device with different screen dimensions (like an iPhone 8 Plus), the shape is drawn out of place and not around the UILabel as it should be. I used Auto Layout to horizontally and vertically center the UILabel in the view, so the UILabel is centered no matter what device.
Any suggestions? Thanks in advance!
Cheers!
A shape layer is not a view, so it is not subject to Auto Layout. And any time you write something like roundedRect: questionLabel.frame, you are making yourself dependent on what questionLabel.frame is at that moment. That is a huge mistake, because the frame is exactly what is not determined until Auto Layout has done its work (and it can change later if Auto Layout changes its mind due to changing conditions, such as rotation).
There are two kinds of solution:
Host the shape layer in a view. Now you have something that is subject to autolayout. You will still need to redraw the shape layer whenever the view changes its frame, but you can detect that and perform the redraw.
Implement your view controller's viewDidLayoutSubviews to detect that auto layout has just done its work. Respond by (for example) removing the shape layer and making a new one based on the current conditions.
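For example, a minimal sketch of the second approach (the stored-property name is mine; questionLabel and the styling come from the question):

var outlineLayer: CAShapeLayer?

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Auto Layout has just (re)positioned questionLabel, so rebuild the outline.
    outlineLayer?.removeFromSuperlayer()
    let shape = CAShapeLayer()
    shape.path = UIBezierPath(roundedRect: questionLabel.frame, cornerRadius: 5).cgPath
    shape.fillColor = UIColor.clear.cgColor
    shape.strokeColor = UIColor.yellow.cgColor
    shape.lineWidth = 5
    view.layer.addSublayer(shape)
    outlineLayer = shape
}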

Turning a UIBezierPath into a mask?

Not sure if I am asking this question correctly, but I have two components: a CIImage and a UIBezierPath. Ideally, I want to create a CGRect that encapsulates my UIBezierPath; everything inside of the path would be white, everything outside of the path would be black. This way, I can render this CGRect to some sort of image, which I could then use as a mask for other purposes.
I am struggling to figure out how to do this with a focus on performance. My tests, as noted below, leverage UIGraphicsImageRenderer, which is far too slow for my needs (I will be doing this on sample buffers from a camera). Therefore, I would like to stay within Core Image. This is my attempt:
// Path
let path = UIBezierPath()
// ... define the path's shape and close it
// My source image
let image = CIImage(cgImage: UIImage(named: "test.jpg")!.cgImage!)
// Renderer
let renderer = UIGraphicsImageRenderer(size: image.extent.size)
// Render path as mask
let img = renderer.image { ctx in
    ctx.cgContext.setFillColor(UIColor.black.cgColor)
    ctx.cgContext.fill(CGRect(x: 0, y: 0, width: image.extent.size.width, height: image.extent.size.height))
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addPath(path.cgPath)
    ctx.cgContext.drawPath(using: .fill)
}
// Put a filter on the image
let imageFiltered = image.applyingFilter("CIPhotoEffectNoir")
// Blend with mask
let maskFilter = CIFilter.blendWithMask()
maskFilter.inputImage = imageFiltered
maskFilter.backgroundImage = image
maskFilter.maskImage = CIImage(cgImage: img.cgImage!)
// Output
if let output = maskFilter.outputImage {
    // ... use CIContext() to render back to a CVPixelBuffer for preview on an MTKView.
}
Overall, the goal is to have a defined portion of an image (which will not conform to a traditional shape like a square or circle) filtered with a CIFilter, then composited back over the original. If there is a better approach (such as taking the original image, filtering it, cropping it to the path, leaving everything outside of the path transparent, and compositing), that would likely perform better.
Note that the above sample code results in a crash, as UIGraphicsImageRenderer cannot render the mask fast enough.
Your approach looks good so far. I assume the slow part is the generation of the mask image with Core Graphics. Unfortunately, there is no way to do the same with Core Image directly (on the GPU). However, you can try the following:
(Assuming from your previous question that the path always has a certain shape,) you can generate a mask image containing the path once for a certain reference size of your choice. Make sure that the path doesn't "touch" the border.
Then, when you want to use it as a mask, move and scale the shape image to the correct place using transformations and let its edges extend infinitely (to cover the whole underlying image; that's why the shape shouldn't touch the edges). Something like this:
let pathImage = CIImage(cgImage: img.cgImage!)
// scale path to the size of the area you want to mask
var mask = pathImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
// move path to the place you want to cover
mask = mask.transformed(by: CGAffineTransform(translationX: offsetX, y: offsetY))
// let mask fill the rest of the area
mask = mask.clampedToExtent()
// use mask as maskImage...
You should be able to recycle the pathImage for every frame and thereby avoid Core Graphics and CPU-GPU synchronization.
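For completeness, a rough sketch of how the one-time mask image and the transform values above might be produced (referenceSize, targetRect and the coordinate handling are my assumptions, not part of the answer):

// Generate the reference mask once (still Core Graphics, but off the per-frame path).
let referenceSize = CGSize(width: 512, height: 512)
let maskUIImage = UIGraphicsImageRenderer(size: referenceSize).image { ctx in
    UIColor.black.setFill()
    ctx.fill(CGRect(origin: .zero, size: referenceSize))
    UIColor.white.setFill()
    ctx.cgContext.addPath(path.cgPath)
    ctx.cgContext.fillPath()
}
let pathImage = CIImage(cgImage: maskUIImage.cgImage!)

// Per frame: derive the transform values from where the shape should sit over
// the underlying image (targetRect, in that image's coordinates). Note that
// Core Image uses a bottom-left origin, so the vertical offset may need flipping
// relative to UIKit coordinates.
let scaleX = targetRect.width / referenceSize.width
let scaleY = targetRect.height / referenceSize.height
let offsetX = targetRect.minX
let offsetY = targetRect.minY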

uibezierpath with multiple line width and a background image (swift)

I'm trying to draw several shapes (rectangle, triangle, circle, ...) in Swift, and for that I use UIBezierPath, but I am not able to draw exactly what I want.
I need to draw, for example, a rectangle, but the borders of this rectangle need to have different line widths.
To do that, I create different paths, then use appendPath to merge them into one path. That works, BUT I also need to have a background image in this rectangle.
For that, I create a layer and set an image on it. The issue is that no background image is displayed when I use appendPath, probably because it doesn't recognize my drawing as a rectangle.
I hope it is clear enough, but is there a way to draw a shape with a background image and different border widths?
Thanks for your help !!
There are two solutions I'd suggest you try:
1) Masking
Create a normal CALayer and set the image as its contents. Then create a CAShapeLayer with the path you like and use it as the first layer's mask.
E.g.:
let imageLayer = CALayer()
imageLayer.contents = UIImage(named: "yourImage")?.CGImage // Your image here
imageLayer.frame = ... // Define a frame
let maskPath = UIBezierPath(...) // Create your path here
let maskLayer = CAShapeLayer()
maskLayer.path = maskPath.CGPath
imageLayer.mask = maskLayer
Don't forget to set the right frames and paths, and you should be able to achieve the effect you wanted.
2) Fill color
Create a CAShapeLayer with the path you like, then use your image as its fillColor.
E.g.:
let path = UIBezierPath(...) // Create your path here
let layer = CAShapeLayer()
layer.path = path.CGPath
let image = UIImage(named: "yourImage") // Your image here
layer.fillColor = UIColor(patternImage: image!).CGColor
You may find this approach easier at first, but controlling the way the image fills your shape is not trivial at all.
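If the varying border widths are still needed on top of either approach, one option (a rough sketch in current Swift syntax, not from this answer; the geometry, color and hostLayer are placeholders) is to stroke each edge with its own CAShapeLayer:

let rect = CGRect(x: 0, y: 0, width: 200, height: 100)
let corners = [CGPoint(x: rect.minX, y: rect.minY), CGPoint(x: rect.maxX, y: rect.minY),
               CGPoint(x: rect.maxX, y: rect.maxY), CGPoint(x: rect.minX, y: rect.maxY)]
let widths: [CGFloat] = [2, 6, 2, 10] // top, right, bottom, left

for i in 0..<4 {
    // One layer per edge, so each edge can have its own line width.
    let edgePath = UIBezierPath()
    edgePath.move(to: corners[i])
    edgePath.addLine(to: corners[(i + 1) % 4])
    let border = CAShapeLayer()
    border.path = edgePath.cgPath
    border.strokeColor = UIColor.black.cgColor
    border.lineWidth = widths[i]
    hostLayer.addSublayer(border) // the layer that also hosts the image layer
}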
I hope this will help.
If you'd like more details, please provide an image or a sketch of what you're trying to achieve and / or the code you've written so far. Thanks!

How to force SKTextureAtlas created from a dictionary to not modify textures size?

In my project, textures are procedurally generated by methods provided by PaintCode (paint-code).
I then create an SKTextureAtlas from a dictionary filled with UIImages generated by these methods:
myAtlas = SKTextureAtlas(dictionary: myTextures)
Finally, textures are retrieved from the atlas using textureNamed:
var sprite1 = SKSpriteNode(texture:myAtlas.textureNamed("texture1"))
But the displayed nodes are double-sized on the iPhone 4S simulator, and triple-sized on the iPhone 6 Plus simulator.
It seems that, at init, the atlas converts the images to the device resolution.
But the generated images already have the correct size and do not need to be changed. See the drawing method below.
Here is the description of the generated image:
<UIImage: 0x7f86cae56cd0>, {52, 52}
And the description of the corresponding texture in atlas:
<SKTexture> 'image1' (156 x 156)
This is for the iPhone 6 Plus, using @3x images, which is why the size is x3.
And for the iPhone 4S, using @2x images, as expected:
<UIImage: 0x7d55dde0>, {52, 52}
<SKTexture> 'image1' (104 x 104)
Finally, the scale property of the generated UIImage is set to the right device resolution: 2.0 for @2x (iPhone 4S) and 3.0 for @3x (iPhone 6 Plus).
The Question
So what can I do to keep the atlas from resizing the pictures?
Drawing method
PaintCode generates drawing methods like the following:
public class func imageOfCell(#frame: CGRect) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    StyleKit.drawCell(frame: frame)
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
Update 1
Comparing two approaches to generate SKTextureAtlas
// Some test image
let testImage:UIImage...
// Atlas creation
var myTextures = [String:UIImage]()
myTextures["texture1"] = testImage
myAtlas = SKTextureAtlas(dictionary: myTextures)
// Create two textures from the same image
let texture1 = myAtlas.textureNamed("texture1")
let texture2 = SKTexture(image:testImage)
// Wrong display : node is oversized
var sprite1 = SKSpriteNode(texture:texture1)
// Correct display
var sprite2 = SKSpriteNode(texture:texture2)
It seems that the problem lies in SKTextureAtlas created from a dictionary, since the SKSpriteNode initialization does not use the scale property of the UIImage to correctly size the node.
Here are descriptions on console:
- texture1: '' (84 x 84)
- texture2: 'texture1' (84 x 84)
texture1 is missing some data! That could explain the lack of scale information needed to properly size the node, since:
node's size = texture's size divided by texture's scale (e.g. a 156 x 156 pixel texture at scale 3.0 should give a 52 x 52 point node).
Update 2
The problem occurs when the scale property of the UIImage is different from 1.0.
So you can use the following method to generate the picture:
func imageOfCell(frame: CGRect, color: SKColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    let bezierPath = UIBezierPath(rect: frame)
    color.setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
The problem comes from the use of SKTextureAtlas(dictionary:) to initialize the atlas.
SKTextures created using this method do not embed data related to the image's scale property. So during creation of an SKSpriteNode with init(texture:), the lack of scale information in the texture leads it to use the texture's size (in pixels) in place of the image's size (in points).
One way to correct this is to provide the node's size during SKSpriteNode creation: init(texture:size:).
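For instance, reusing the names from Update 1, a minimal sketch of that fix:

// Size the node from the UIImage (points) instead of the texture (pixels).
let sprite1 = SKSpriteNode(texture: texture1, size: testImage.size)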
From the documentation of the scale parameter of UIGraphicsBeginImageContextWithOptions:
The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen.
Therefore, if you want the textures to be the same "size" across all devices, set this value to 1.0.
EDIT:
override func didMoveToView(view: SKView) {
    let image = imageOfCell(CGRectMake(0, 0, 10, 10), scale: 0)
    let dict: [String: UIImage] = ["t1": image]
    let texture = SKTextureAtlas(dictionary: dict)
    let sprite1 = SKSpriteNode(texture: texture.textureNamed("t1"))
    sprite1.position = CGPointMake(CGRectGetMidX(view.frame), CGRectGetMidY(view.frame))
    addChild(sprite1)
    println(sprite1.size)
    // prints (30.0, 30.0) if scale = 0
    // prints (10.0, 10.0) if scale = 1
}

func imageOfCell(frame: CGRect, scale: CGFloat) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, scale)
    let bezierPath = UIBezierPath(rect: frame)
    UIColor.whiteColor().setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}

How to Scale UIBezierPath to Fit Current View (in Swift)

I've created a class that draws a coffee mug using code I imported from PaintCode, and I applied this class to a view. Using @IBDesignable, I can see in my storyboard that the mug is being drawn inside the view; however, the overall shape is too big. I could redraw the shape in code so that it fits the current size of the view, but isn't there a way to scale the shape after it is drawn, so that as my view changes size on different devices the shape is scaled correctly?
I've looked into CGContextScaleCTM(aRef, <#sx: CGFloat#>, <#sy: CGFloat#>), but I am not sure how to convert the CGRect of my view's bounds to the right scale factor.
I didn't want to post all of it, but my drawing code begins like this:
bezierPath.moveToPoint(CGPointMake(64.8, 52.81))
bezierPath.addCurveToPoint(CGPointMake(58.89, 43.44), controlPoint1: CGPointMake(64.21, 48.28), controlPoint2: CGPointMake(62.11, 44.95))
bezierPath.addCurveToPoint(CGPointMake(56.82, 42.76), controlPoint1: CGPointMake(58.24, 43.13), controlPoint2: CGPointMake(57.55, 42.9))
This goes on, and then:
bezierPath.closePath()
bezierPath.miterLimit = 4
bezierPath.usesEvenOddFillRule = true;
Then there are two other chunks of drawing code that draw two little lines for the coffee steam. I append these two paths to the original bezierPath, then set a fill color and fill the whole shape.
In code, you can scale your paths as you want using the UIBezierPath Swift extension PaintCodeScale, e.g.:
bezierPath.fit(into: rect).moveCenter(to: rect.center).fill()
Since I used PaintCode to generate my drawing code, I found a way to implement @dasdom's suggestion using help from the app.
In PaintCode there is a "frame" tool which you can place around your drawing. This enables constraints for your artwork so that the vectors are re-drawn relative to the frame size. The frame is a variable that is exported along with your code when you bring it into Xcode. When I added the drawing code to my class in Xcode and then added the class to my view in the Storyboard, Xcode automatically scaled the frame to the view size, and thus the drawing code within my class was also automatically resized to fit my view. Now the artwork will be automatically re-drawn to fit whatever view I add my class to. The automatic resizing may be occurring due to the "Automatically resize subviews" option that is enabled in the Storyboard for the view I have applied my graphics class to.
func scalePath(path: UIBezierPath) -> UIBezierPath {
    let w1: CGFloat = path.bounds.size.width
    let h1: CGFloat = path.bounds.size.height
    let w2: CGFloat = self.frame.width
    let h2: CGFloat = self.frame.height
    var s: CGFloat = 1.0
    // Scale to the smaller dimension to keep the aspect ratio
    if w2 <= h2 {
        s = w2 / w1
    } else {
        s = h2 / h1
    }
    path.apply(CGAffineTransform(scaleX: s, y: s))
    return path
}
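A possible usage of this helper inside the view's draw method (a sketch; the fill color and the assumption that bezierPath is the unscaled PaintCode-generated path are mine):

override func draw(_ rect: CGRect) {
    // bezierPath is assumed to be the path built by the PaintCode-generated code above
    UIColor.brown.setFill()
    scalePath(path: bezierPath).fill()
}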