How to Scale UIBezierPath to Fit Current View (in Swift)

I've created a class that draws a coffee mug using code I imported from PaintCode, and I applied this class to a view. Using @IBDesignable, I can see in my storyboard that the mug is being drawn inside the view, but the overall shape is too big. I could redraw the shape in code so that it fits the current size of the view, but isn't there a way to scale the shape after it is drawn, so that as my view changes size on different devices the shape is scaled correctly?
I've looked into CGContextScaleCTM(aRef, <#sx: CGFloat#>, <#sy: CGFloat#>), but I am not sure how to convert my view's bounds CGRect into the right scale factors.
I didn't want to post all of it, but my drawing code begins like this:
bezierPath.moveToPoint(CGPointMake(64.8, 52.81))
bezierPath.addCurveToPoint(CGPointMake(58.89, 43.44), controlPoint1: CGPointMake(64.21, 48.28), controlPoint2: CGPointMake(62.11, 44.95))
bezierPath.addCurveToPoint(CGPointMake(56.82, 42.76), controlPoint1: CGPointMake(58.24, 43.13), controlPoint2: CGPointMake(57.55, 42.9))
This goes on then
bezierPath.closePath()
bezierPath.miterLimit = 4
bezierPath.usesEvenOddFillRule = true;
Then there are two other chunks of drawing code that draw two little lines for the coffee steam. I append these two paths to the original bezierPath, then set a fill color and fill the whole shape.

In code you can scale your paths as you want using this UIBezierPath Swift extension, PaintCodeScale. For example:
bezierPath.fit(into: rect).moveCenter(to: rect.center).fill()

Since I used PaintCode to generate my drawing code, I found a way to implement @dasdom's suggestion with help from the app.
In PaintCode there is a "frame" tool which you can place around your drawing. This enables constraints for your artwork so that the vectors are redrawn relative to the frame size. The frame is a variable that is exported along with your code when you bring it into Xcode. When I added the drawing code to my class in Xcode and then added the class to my view in the storyboard, Xcode automatically scaled the frame to the view size, so the drawing code within my class was also automatically resized to fit my view. Now the artwork is automatically redrawn to fit whatever view I add my class to. The automatic resizing may be occurring because the "Automatically resize subviews" option is enabled in the storyboard for the view that I have applied my graphics class to.

func scalePath(path: UIBezierPath) -> UIBezierPath {
    let w1: CGFloat = path.bounds.size.width
    let h1: CGFloat = path.bounds.size.height
    let w2: CGFloat = self.frame.width
    let h2: CGFloat = self.frame.height
    // Use the smaller of the two ratios so the whole path fits inside
    // the frame while keeping its aspect ratio.
    let s: CGFloat = min(w2 / w1, h2 / h1)
    path.apply(CGAffineTransform(scaleX: s, y: s))
    return path
}
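For context, a minimal usage sketch (my assumption about how it might be called, assuming scalePath lives in a UIView subclass and bezierPath is the mug path built by the PaintCode export):

override func draw(_ rect: CGRect) {
    // Scale the exported path to this view's current frame, then fill it.
    let scaled = scalePath(path: bezierPath)
    UIColor.brown.setFill() // any fill color
    scaled.fill()
}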

Related

How to adjust position of CAShapeLayer based upon device size?

I'm attempting to create a CAShapeLayer animation that draws an outline around the frame of a UILabel. Here's the code:
func newQuestionOutline() -> CAShapeLayer {
    let outlineShape = CAShapeLayer()
    outlineShape.isHidden = false
    let circularPath = UIBezierPath(roundedRect: questionLabel.frame, cornerRadius: 5)
    outlineShape.path = circularPath.cgPath
    outlineShape.fillColor = UIColor.clear.cgColor
    outlineShape.strokeColor = UIColor.yellow.cgColor
    outlineShape.lineWidth = 5
    outlineShape.strokeEnd = 0
    view.layer.addSublayer(outlineShape)
    return outlineShape
}
func newQuestionAnimation() {
    let outlineAnimation = CABasicAnimation(keyPath: "strokeEnd")
    outlineAnimation.toValue = 1
    outlineAnimation.duration = 5
    newQuestionOutline().add(outlineAnimation, forKey: "key")
}
The animation performs as expected when running on the simulator for an iPhone 11, which is the device size I used in the storyboard. However, when running the project on a device with different screen dimensions (like an iPhone 8 Plus), the shape is drawn out of place and not around the UILabel as it should be. I used auto layout to horizontally and vertically center the UILabel in the view, so the UILabel is centered no matter what the device is.
Any suggestions? Thanks in advance!
Cheers!
A shape layer is not a view, so it is not subject to auto layout. And any time you say something like roundedRect: questionLabel.frame, you are making yourself dependent on what questionLabel.frame is at that moment, which is a huge mistake, because that is exactly what is not determined until auto layout determines what the frame will be (and it can change later if auto layout changes its mind due to changing conditions, such as rotation).
There are two kinds of solution:
1. Host the shape layer in a view. Now you have something that is subject to auto layout. You will still need to redraw the shape layer whenever the view changes its frame, but you can detect that and perform the redraw.
2. Implement your view controller's viewDidLayoutSubviews to detect that auto layout has just done its work. Respond by (for example) removing the shape layer and making a new one based on the current conditions.
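A minimal sketch of the second approach (the currentOutline property and the rebuild-on-layout flow are assumptions, not code from the question):

var currentOutline: CAShapeLayer?

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Auto layout has just (re)positioned questionLabel, so discard the
    // outline built for the old frame and rebuild it from the current one.
    currentOutline?.removeFromSuperlayer()
    currentOutline = newQuestionOutline()
}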

Turning a UIBezierPath into a mask?

Not sure if I am asking this question correctly, but I have two components: a CIImage and a UIBezierPath. Ideally, I want to create a CGRect that encapsulates my UIBezierPath; everything inside of the path would be white, everything outside of the path would be black. This way, I can then render this CGRect to some sort of image, which I could then use as a mask for other purposes.
I am struggling to figure out how to do this with a focus on performance. My tests, as noted below, use UIGraphicsImageRenderer, which is far too slow for my needs (I will be doing this on sample buffers from a camera). Therefore, I would like to stay within Core Image. This is my attempt:
// Path
let path = UIBezierPath()
// ... define the path's shape and close it

// My source image
let image = CIImage(cgImage: UIImage(named: "test.jpg")!.cgImage!)

// Renderer
let renderer = UIGraphicsImageRenderer(size: image.extent.size)

// Render path as mask
let img = renderer.image { ctx in
    ctx.cgContext.setFillColor(UIColor.black.cgColor)
    ctx.cgContext.fill(CGRect(x: 0, y: 0, width: image.extent.size.width, height: image.extent.size.height))
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addPath(path.cgPath)
    ctx.cgContext.drawPath(using: .fill)
}
// Put a filter on the image
let imageFiltered = image.applyingFilter("CIPhotoEffectNoir")

// Blend with mask
let maskFilter = CIFilter.blendWithMask()
maskFilter.inputImage = imageFiltered
maskFilter.backgroundImage = image
maskFilter.maskImage = CIImage(cgImage: img.cgImage!)

// Output
if let output = maskFilter.outputImage {
    // ... use CIContext() to render back to CVPixelBuffer for preview on MTKView.
}
Overall, the goal is to have a defined portion of an image (which will not conform to a traditional shape like a square or circle) that is filtered with a CIFilter and then composited back over the original. If there is a better approach (such as taking the original image, filtering it, cropping it to the path, leaving everything outside of the path transparent, and then compositing), that would likely perform better.
To note, the above sample code results in a crash as the UIGraphicsImageRenderer cannot render the mask fast enough.
Your approach looks good so far. I assume the slow part is the generation of the mask image with Core Graphics. Unfortunately, there is no direct way to do the same with Core Image (on the GPU). However, you can try the following:
Assuming (from your previous question) that the path always has the same shape, you can generate a mask image containing the path once, for a reference size of your choice. Make sure that the path doesn't "touch" the border.
Then, when you want to use it as a mask, move and scale the shape image to the correct place using transformations and let its edges extend infinitely (to cover the whole underlying image; that's why the shape shouldn't touch the edges). Something like this:
let pathImage = CIImage(cgImage: img.cgImage!)
// scale path to the size of the area you want to mask
var mask = pathImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
// move path to the place you want to cover
mask = mask.transformed(by: CGAffineTransform(translationX: offsetX, y: offsetY))
// let mask fill the rest of the area
mask = mask.clampedToExtent()
// use mask as maskImage...
You should be able to recycle the pathImage for every frame and thereby avoid Core Graphics and CPU-GPU synchronization.
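For illustration, a hedged sketch of how the scale and translation values might be derived (the helper name, referenceSize, and targetRect are assumptions; referenceSize is whatever size the path image was rendered at, and targetRect is where the mask should sit in the masked image's coordinate space):

// Hypothetical helper: place the pre-rendered path image over targetRect.
func makeMask(from pathImage: CIImage, referenceSize: CGSize, targetRect: CGRect) -> CIImage {
    let scaleX = targetRect.width / referenceSize.width
    let scaleY = targetRect.height / referenceSize.height
    var mask = pathImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
    mask = mask.transformed(by: CGAffineTransform(translationX: targetRect.minX, y: targetRect.minY))
    // Let the mask's edges extend over the rest of the underlying image.
    return mask.clampedToExtent()
}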

How to prevent distorted images?

I have the problem that the images I add are distorted. I have created a pixel-accurate background for the iPhone X (1125 x 2436), so I don't have to use .aspectFill or .aspectFit, because I want a screen without black borders.
I use the following code to create the images:
func animateDeck() {
    let chip = SKSpriteNode(imageNamed: "Chip")
    chip.position = CGPoint(x: 300, y: 400)
    chip.zPosition = 2
    chip.setScale(1)
    gameScene2.addChild(chip)
    print("test")
}
Is there a way to display the images in their correct size without using .aspectFit or .aspectFill?
(Screenshots: now on the left, how it should be on the right.)
Thank you in advance!
Check out this project I just made to show you how to create a texture and apply it to a node. All you need should be in GameScene.swift.
Also, in your ViewController, make sure that your GameScene is initialised properly, as shown in my project, or as you did it with this:
gameScene2 = GameScene(size: view.bounds.size)
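A minimal sketch of that initialisation from a view controller (assuming the root view is an SKView; the .resizeFill scale mode is my assumption, not part of the answer):

override func viewDidLoad() {
    super.viewDidLoad()
    if let skView = view as? SKView {
        // Size the scene to the actual view so sprites keep their intended size.
        gameScene2 = GameScene(size: skView.bounds.size)
        gameScene2.scaleMode = .resizeFill // assumption: keeps the scene matched to the view
        skView.presentScene(gameScene2)
    }
}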

SpriteKit: using SKView in UIView instead of initializing a Game project

Completely new to SpriteKit. Currently I have a UIView, and I want to add a sprite node to it (like a small UIImageView, but I want to animate it, so I'm using SpriteKit). Therefore I didn't set the project up as a game project, as almost all SpriteKit tutorials assume. I've found a note here: link, and what I have now is something like:
func initializeImage() {
    let imageView = SKView()
    // place it somewhere in the bottom middle of the whole frame
    imageView.frame = CGRect(x: self.frame.width / 2 - Constants.imageWidth / 2,
                             y: self.frame.height - Constants.imageHeight,
                             width: Constants.imageWidth,
                             height: Constants.imageHeight)

    let sheet = SpriteSheet(texture: ...)
    let sprite = SKSpriteNode(texture: sheet.itemFor(column: 0, row: 0))
    sprite.position = imageView.center // basically the same position as the imageView.frame's x and y value

    let scene = SKScene(size: imageView.frame.size)
    scene.backgroundColor = SKColor.clear
    scene.addChild(sprite)
    imageView.presentScene(scene)
    self.addSubview(imageView)
}
The SpriteSheet is similar to this: sprite sheet; it essentially cuts an image atlas into smaller images. I stepped through the process and this step is indeed producing the smaller image (the var 'sprite'). But when running, I only get a black square (the size defined by Constants). If I set scene.backgroundColor to white, then it's white. How should I proceed from here to make the sprite show up?
All of your code looks good except for this:
sprite.position = imageView.center // basically the same position as the imageView.frame's x and y value
That is basically not the position you think it is. The coordinate system in SpriteKit is a) relative to the (SK)scene, not to whatever view the SKView is contained in, and b) flipped vertically relative to the UIKit coordinate system. If you want a sprite centered in the scene, you probably want to set its position based on the scene's size:
sprite.position = CGPoint(x: scene.size.width / 2, y: scene.size.height / 2)
By the way, the external SpriteSheet code might not be needed (and you're more likely to benefit from Apple's optimizations) if you slice up your sprite sheet and put it in an Xcode asset catalog.
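For instance, a one-line sketch of that alternative (the asset name "sheet_0_0" is a made-up placeholder for one sliced frame):

// Assuming the individual frames have been sliced and added to the asset catalog.
let sprite = SKSpriteNode(imageNamed: "sheet_0_0")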

How to edit high resolution images with Core Graphics?

I'm trying to draw a path on a high-resolution image. That's nothing complicated for an iPhone, but if I add a shadow to my path everything lags. It only lags when I work on images around a certain resolution (2000 x 3000), or even less.
The storyboard views are:
-Scroll View
-Image View
-Draw View
So I have the DrawingView on top of the ImageView when I need to draw.
So the ImageView and the DrawView (view.bounds.size) have the same resolution as the image (e.g. 2000 x 3000), and there's the problem.
I'm drawing on a view with a high resolution.
I'm not calling drawRect: directly, only calling setNeedsDisplay() inside touchesBegan() and touchesMoved() after doing some operations (path.moveToPoint, path.addCurveToPoint, array operations) and adding points to my array.
In drawRect: I essentially have:
override func drawRect(rect: CGRect) {
    print(self.bounds.size)
    UIColor.greenColor().setStroke()
    path.lineCapStyle = .Round
    path.lineJoinStyle = .Round
    path.lineWidth = 60.0
    context = UIGraphicsGetCurrentContext()!
    CGContextAddPath(context, path.CGPath)
    CGContextSetShadowWithColor(context, CGSizeZero, 14.0, UIColor.whiteColor().CGColor) // <-- with this shadow it lags a lot
    path.stroke()
}
My path is a UIBezierPath().
Any ideas to improve the speed?
Update:
I followed what @brimstone said. I now have an ImageView with a lower resolution, but I have to apply my drawn path to the high-resolution image.
(I'm trying to hand-crop an image with the path that the user draws.)
In this code I already have my closed path:
let layer = CAShapeLayer()
layer.path = path.CGPath
self.imageToEditView.layer.mask = layer
UIGraphicsBeginImageContext(self.imageEdited.size)
self.imageToEditView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let croppedCGImage = CGImageCreateWithImageInRect(image.CGImage!, CGPathGetBoundingBox(path.CGPath))
let croppedImage = UIImage(CGImage: croppedCGImage!)
self.imageToEditView.image = croppedImage
self.imageToEditView.layer.mask = nil
imageToEditView.bounds.size = low resolution
imageEdited.size = high resolution
I need to set the high resolution (I think) when I renderInContext. But how can I change the resolution of the imageView now?
Try downsizing it for the user to draw over (it doesn't make a huge difference to the user experience on small iPhone screens), then apply the edits to the high-res image.
To downsize images, either use UIImagePNGRepresentation, which may make your image sufficiently smaller, or (if you're still having memory issues) try the techniques in this tutorial and this answer to make it even smaller.
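As one illustration (not the exact technique mentioned above; the helper name and renderer-based resize are assumptions, written in current Swift syntax while the question's code is Swift 2):

// Hypothetical helper: redraw the image into a smaller context so the
// user draws over a lighter-weight copy.
func downsized(_ image: UIImage, to targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}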
Then, you can take the content of what they've drawn and apply it to the high-res image.
Alternatively, look at high-res optimisation techniques by Apple: https://developer.apple.com/library/ios/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/SupportingHiResScreensInViews/SupportingHiResScreensInViews.html