How to rescale PKDrawing in SwiftUI on screen rotation?

I'm currently making an app using PencilKit and SwiftUI. If the screen is rotated, the canvas (and the drawing) should be rescaled accordingly. I do this with a GeometryReader, calculating the size from its proxy:
NavigationView {
    GeometryReader { g in
        HStack {
            CanvasView(canvasView: $canvasView)
                .frame(width: g.size.width/1.5, height: (g.size.width/1.5)/1.5)
            placeholder(placeholder: scaleDrawing(canvasHeight: (g.size.width/1.5)/1.5))
        }
    }
}
func scaleDrawing(canvasHeight: CGFloat) -> Bool {
    let factor = canvasHeight / lastCanvasHeight
    let transform = CGAffineTransform(scaleX: factor, y: factor)
    canvasView.drawing = canvasView.drawing.transformed(using: transform)
    lastCanvasHeight = canvasHeight
    return true
}
The drawing, however, does not get scaled by the frame change alone. My workaround was to create a placeholder view that takes a Boolean parameter, together with a method that scales the drawing given the height of the canvas as input. This works, but I think it's pretty hacky and not the best solution; it was simply the only one I could get working. Is there a better way to do this? What am I missing?
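One less hacky possibility (a sketch of mine, not from the original thread): trigger the rescale as an explicit side effect when the computed canvas height changes, rather than abusing a placeholder view for its initializer's side effect. This assumes iOS 14+ for .onChange(of:) and the same canvasView and lastCanvasHeight state as above:

NavigationView {
    GeometryReader { g in
        // Compute the canvas size once so the frame and the rescale agree.
        let canvasHeight = (g.size.width / 1.5) / 1.5
        HStack {
            CanvasView(canvasView: $canvasView)
                .frame(width: g.size.width / 1.5, height: canvasHeight)
        }
        .onChange(of: canvasHeight) { newHeight in
            // Same transform as scaleDrawing, run only when the size changes.
            let factor = newHeight / lastCanvasHeight
            canvasView.drawing = canvasView.drawing
                .transformed(using: CGAffineTransform(scaleX: factor, y: factor))
            lastCanvasHeight = newHeight
        }
    }
}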

Related

Set pencilkit canvas to be the input image size and prevent drawing outside that region Swiftui

How to prevent drawing outside the scaled aspect ratio image background?
I need to draw on images, similar to markup, and based on what I've seen, you have to add an image to the background of a PKCanvasView.
In the SO post below, I did more or less the same. However, my device and image sizes aren't identical.
My PencilKit code is similar to this:
struct MyCanvas: UIViewRepresentable {
    var canvasView: PKCanvasView
    let picker = PKToolPicker()

    func makeUIView(context: Context) -> PKCanvasView {
        canvasView.tool = PKInkingTool(.pen, color: .black, width: 15)
        canvasView.isOpaque = false
        canvasView.backgroundColor = UIColor.clear
        canvasView.becomeFirstResponder()
        // The background must be a UIKit view; a SwiftUI Image cannot be
        // added as a subview of a UIView.
        let imageView = UIImageView(image: UIImage(named: "badmintoncourt"))
        imageView.contentMode = .scaleAspectFit
        imageView.clipsToBounds = true
        let subView = canvasView.subviews[0]
        subView.addSubview(imageView)
        subView.sendSubviewToBack(imageView)
        return canvasView
    }

    func updateUIView(_ uiView: PKCanvasView, context: Context) {
        picker.addObserver(canvasView)
        picker.setVisible(true, forFirstResponder: uiView)
    }
}
My screen size is (1194.0, 790.0), whilst the image size is (1668.0, 2348.0). Despite the image view having an aspect-fit content mode, it doesn't automatically scale.
To fix this I used a GeometryReader, passed in the screen width/height, and set the image view's frame; this scaled images accordingly.
The problem is that I can still draw outside the image bounds: with a vertical image viewed horizontally, there's plenty of white space to the left and right of the image, and I don't want to be able to draw there.
Is there any way to constrain the drawable canvas to the scaled image?
Another problem is that when I change device orientation, the drawings don't stick to the image the way Apple's Markup does. I read a bit, and it seems like anchors might work? I'm unsure how to use them, though.
What worked for me was placing the PKCanvasView inside another UIView and then setting the PKCanvasView's frame to the scaled image dimensions, essentially clipping the view.
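A rough sketch of that wrapping approach (my illustration, not the answerer's actual code; the helper name and the use of AVMakeRect for the aspect-fit math are assumptions):

import AVFoundation // AVMakeRect does the aspect-fit math
import PencilKit
import UIKit

// Pin the canvas to the aspect-fit rect of the image inside a container view,
// so strokes cannot land in the letterboxed margins around the image.
func embedCanvas(_ canvasView: PKCanvasView, over image: UIImage, in container: UIView) {
    let imageRect = AVMakeRect(aspectRatio: image.size, insideRect: container.bounds)
    canvasView.frame = imageRect
    container.clipsToBounds = true
    container.addSubview(canvasView)
}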
I know this question is old, but anyway, this is how I did it:
Image("My Image")
.resizable()
.scaledToFit()
.clipped()
.overlay {
CanvasView(canvasView: $canvasView, onSaved: saveDrawing)
}
.padding(20.0)

SwiftUI .rotationEffect() framing and offsetting

When applying .rotationEffect() to a Text, it rotates the text as expected, but its frame remains unchanged. This becomes an issue when stacking rotated views with non-rotated views, such as in a VStack of HStacks, causing them to overlap.
I initially thought the rotationEffect would simply update the frame of the Text to be vertical, but this is not the case.
I've tried manually setting the frame size (and, where needed, offsetting the Text), which sort of works, but I don't like this solution because it requires guessing and checking where the Text will appear, how big to make the frame, and so on.
Is this just how rotated text is done, or is there a more elegant solution?
struct TextAloneView: View {
    var body: some View {
        VStack {
            Text("Horizontal text")
            Text("Vertical text").rotationEffect(.degrees(-90))
        }
    }
}
[Screenshot: the two Text views overlapping]
You need to adjust the frame yourself in this case. That requires capturing what the frame is, and then applying the adjustment.
First, to capture the existing frame, create a preference, which is a system for passing data from child views to their parents:
private struct SizeKey: PreferenceKey {
    static let defaultValue: CGSize = .zero
    static func reduce(value: inout CGSize, nextValue: () -> CGSize) {
        value = nextValue()
    }
}

extension View {
    func captureSize(in binding: Binding<CGSize>) -> some View {
        overlay(GeometryReader { proxy in
            Color.clear.preference(key: SizeKey.self, value: proxy.size)
        })
        .onPreferenceChange(SizeKey.self) { size in binding.wrappedValue = size }
    }
}
This creates a new .captureSize(in: $binding) method on Views.
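For instance, a minimal usage sketch (my example, not part of the original answer):

struct MeasuredText: View {
    @State private var textSize: CGSize = .zero
    var body: some View {
        Text("Measure me")
            .captureSize(in: $textSize) // textSize is updated after layout
    }
}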
Using that, we can create a new kind of View that rotates its frame:
struct Rotated<Rotated: View>: View {
    var view: Rotated
    var angle: Angle

    init(_ view: Rotated, angle: Angle = .degrees(-90)) {
        self.view = view
        self.angle = angle
    }

    @State private var size: CGSize = .zero

    var body: some View {
        // Rotate the frame, and compute the smallest integral frame that contains it
        let newFrame = CGRect(origin: .zero, size: size)
            .offsetBy(dx: -size.width/2, dy: -size.height/2)
            .applying(.init(rotationAngle: CGFloat(angle.radians)))
            .integral

        return view
            .fixedSize()                    // Don't change the view's ideal frame
            .captureSize(in: $size)         // Capture the size of the view's ideal frame
            .rotationEffect(angle)          // Rotate the view
            .frame(width: newFrame.width,   // And apply the new frame
                   height: newFrame.height)
    }
}
And for convenience, an extension to apply it:
extension View {
    func rotated(_ angle: Angle = .degrees(-90)) -> some View {
        Rotated(self, angle: angle)
    }
}
And now your code should work as you expect:
struct TextAloneView: View {
    var body: some View {
        VStack {
            Text("Horizontal text")
            Text("Vertical text").rotated()
        }
    }
}
rotationEffect takes a second argument, the anchor point; if you omit it, the default is .center.
Try this instead:
.rotationEffect(.degrees(-90), anchor: .bottomTrailing)
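In context, that looks like this (my sketch; note that the anchor only changes the pivot point around which the text rotates, while the unrotated-frame issue from the question still applies):

VStack {
    Text("Horizontal text")
    Text("Vertical text")
        .rotationEffect(.degrees(-90), anchor: .bottomTrailing)
}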

Consistent curves with dynamic corner radius (Swift)?

Is there a way to make the corner radius of a UIView adapt to the view it belongs to? I'm not comfortable with the idea of hard-coding corner radius values, because as soon as the width or the height of your view changes (on different screen orientations, for example), the corners will look totally different. For example, take a look at WhatsApp's chat window.
As you can see, every message container view has a different width and a different height, but the curve of the corners is exactly the same on all of them. This is what I'm trying to achieve: I want the curves of my corners to be the same on every view, no matter what the size of the view is or what screen it is displayed on. I've tried setting the corner radius relative to the view's height (view.layer.cornerRadius = view.frame.size.height * 0.25), and I've also tried setting it relative to the view's width, but neither works; the corners still look different as soon as they are displayed on a different screen size. Please let me know if there's a formula or trick to make the curves look the same at every view/screen size.
Here's the best I can do. I don't know if this will be of help, but hopefully it will give you some ideas.
First the code:
class ViewController: UIViewController {

    let cornerRadius: CGFloat = 10
    let insetValue: CGFloat = 10
    var numberOfViews: Int = 0
    var myViews = [UIView]()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.translatesAutoresizingMaskIntoConstraints = false
    }

    override func viewWillLayoutSubviews() {
        // Guard so repeated layout passes don't rebuild the hierarchy.
        guard myViews.isEmpty else { return }
        setNumberOfViews()
        createViews()
        createViewHierarchy()
        addConstraints()
    }

    func setNumberOfViews() {
        var smallerDimension: CGFloat = 0
        if view.frame.height < view.frame.width {
            smallerDimension = view.frame.height
        } else {
            smallerDimension = view.frame.width
        }
        let viewCount = smallerDimension / (insetValue * 2)
        numberOfViews = Int(viewCount)
    }

    func createViews() {
        for i in 1...numberOfViews {
            switch i % 5 {
            case 0:
                myViews.append(MyView(UIColor.black, cornerRadius))
            case 1:
                myViews.append(MyView(UIColor.blue, cornerRadius))
            case 2:
                myViews.append(MyView(UIColor.red, cornerRadius))
            case 3:
                myViews.append(MyView(UIColor.yellow, cornerRadius))
            case 4:
                myViews.append(MyView(UIColor.green, cornerRadius))
            default:
                break
            }
        }
    }

    func createViewHierarchy() {
        view.addSubview(myViews[0])
        for i in 1...myViews.count-1 {
            myViews[i-1].addSubview(myViews[i])
        }
    }

    func addConstraints() {
        for view in myViews {
            view.topAnchor.constraint(equalTo: (view.superview?.topAnchor)!, constant: insetValue).isActive = true
            view.leadingAnchor.constraint(equalTo: (view.superview?.leadingAnchor)!, constant: insetValue).isActive = true
            view.trailingAnchor.constraint(equalTo: (view.superview?.trailingAnchor)!, constant: -insetValue).isActive = true
            view.bottomAnchor.constraint(equalTo: (view.superview?.bottomAnchor)!, constant: -insetValue).isActive = true
        }
    }
}

class MyView: UIView {
    convenience init(_ backgroundColor: UIColor, _ cornerRadius: CGFloat) {
        self.init(frame: CGRect.zero)
        self.translatesAutoresizingMaskIntoConstraints = false
        self.backgroundColor = backgroundColor
        self.layer.cornerRadius = cornerRadius
    }
}
Explanation:
This is fairly simple code. The intent was to create as deeply nested a view hierarchy as possible and, using Auto Layout, to have two main variables: cornerRadius (the view's corner radius) and insetValue (the "frame's" inset). These two variables can be adjusted for experimenting.
The bulk of the logic is in viewWillLayoutSubviews, where the root view's frame size is known. Since I'm using five different background colors, I calculate how many views can fit in the hierarchy. Then I create them, build the view hierarchy, and finally add the constraints.
Experimenting and conclusions:
I was able to see what your concern is: yes, if a view's size components are smaller than the corner radius, you end up with inconsistent-looking corners. But these values are pretty small, roughly 10 or less, and most views are unusable at that size. (If I recall correctly, even the HIG suggests that a button should be no less than 40 points in size. Sure, even Apple breaks that rule. Still.)
If your insetValue is sufficiently larger than the corner radius, you should never have an issue. Likewise, in the iMessage scenario, a single UILabel containing text and/or emoticons should have enough height that a noticeable cornerRadius can be had.
The key point is to set things like cornerRadius and insetValue in viewWillLayoutSubviews, when you can decide (1) which is the smaller dimension, height or width, (2) how deeply you can nest views, and (3) how large a corner radius you can set; see the sketch below.
Use Auto Layout! Please note the absolute lack of frames. Other than determining the root view's dimensions at the appropriate time, you can write very compact code without worrying about device size or orientation.
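A minimal sketch of point (3) (my illustration, not part of the original answer; the 10-point cap and the divisor of 4 are assumed values):

override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()
    // Derive the radius from the smaller root dimension so the curve looks
    // the same at any size, capped so small views don't become capsules.
    let smallerDimension = min(view.bounds.width, view.bounds.height)
    let radius = min(10, smallerDimension / 4)
    myViews.forEach { $0.layer.cornerRadius = radius }
}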

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set things up so that the user taps a location in an image view, and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the image filter currently in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function; it checks whether the user is touching inside the image view and, if not, sets the filter's center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was not inside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using. Ignore the "frame" in there; it's just an image view in front of the first one that lets me save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower-left area of the image view regardless of where I tapped. If I keep tapping around, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated. Thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into a CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan point is CGPoint(200,100). (NOTE: if your UIImageView is part of a larger superview, it could be something more like 250,400; I'm working with the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView, at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image itself. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled-down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written (not much in the way of comments, done in Swift 2, I think, and very poorly written) that is part of a subclass (it probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind that images in AspectFit can also be scaled up!
One last note, on CIImage's extent: many times it's the UIImage's size, but masks and generated output may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example, a CGPoint(200,100) scaled to CGPoint(800,400) should actually become CIVector(800,100) once the Y coordinate is flipped against the image height.
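Putting both edits together, a conversion along these lines should work (my sketch, not the answerer's UIImageView subclass; the helper name is illustrative, and AVMakeRect is used for the aspect-fit math):

import AVFoundation
import CoreImage
import UIKit

// Map a touch point inside an aspect-fit UIImageView to CIImage coordinates:
// undo the aspect-fit offset, scale up to pixel size, then flip Y because
// the Core Image origin is bottom-left.
func ciCenter(for touch: CGPoint, in imageView: UIImageView, imageSize: CGSize) -> CIVector {
    let fitRect = AVMakeRect(aspectRatio: imageSize, insideRect: imageView.bounds)
    let scale = imageSize.width / fitRect.width
    let x = (touch.x - fitRect.minX) * scale
    let y = (touch.y - fitRect.minY) * scale
    return CIVector(x: x, y: imageSize.height - y)
}

Note that this version also subtracts the aspect-fit letterbox offset (the 62.5 points in the running example) before scaling, which the worked numbers above gloss over.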
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last was due to my stupidity! Worth noting, but still.)
Now we're talking about "near real time" updating using a Core Image filter. I'm planning to eventually write some blog posts on this, but the real source you want is Simon Gladman (he's moved on; look back to his posts from 2015-16) and his eBook Core Image for Swift (it uses Swift 2, but most of it upgrades automatically to Swift 3). Just giving credit where it is due.
If you want "near real time" use of Core Image, you need to use the GPU. UIView and all its subclasses (meaning UIKit) use the CPU. Using the GPU means moving outside UIKit, specifically to a GLKView; think of it as the GPU-backed equivalent of a UIImageView.
Here's my subclass of it:
open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            // Letterbox the image into the drawable (AspectFit).
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            // rgb() is a custom UIColor extension (not shown) that returns
            // the color's components as Ints.
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0)
            glClear(0x00004000)      // GL_COLOR_BUFFER_BIT
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2)         // GL_BLEND
            glBlendFunc(1, 0x0303)   // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit objc.io for much of this; it's also a great resource for Swift and UIKit coding in general.
I wanted AspectFit content mode with the ability to change the "background color" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for well-performing, near real-time use of Core Image on the GPU. One reason my aforementioned code that scales things after getting a filter's output was never updated? It didn't need to be.
There's a lot here to process, I know. But I've found this side of things (Core Image effects) to be the most fun (and pretty cool) side of iOS.

AVCaptureVideoPreviewLayer and preview from camera position

I'm developing an app that lets the user take photos.
I've started from the AVCam sample Apple provides, but I have a problem: I cannot position the camera layer where I want; it is automatically centered in the view.
On the left side you can see what I actually have; on the right side, what I'd like to have.
The view that contains the preview coming from the camera is a UIView subclass, and this is the code:
class AVPreviewView: UIView {
    override class func layerClass() -> AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }
    func session() -> AVCaptureSession {
        return (self.layer as AVCaptureVideoPreviewLayer).session
    }
    func setSession(session: AVCaptureSession) -> Void {
        (self.layer as AVCaptureVideoPreviewLayer).session = session
        (self.layer as AVCaptureVideoPreviewLayer).videoGravity = AVLayerVideoGravityResizeAspect
    }
}
Any help is appreciated
First, get your screen size so you can calculate the aspect ratio:
let screenWidth = UIScreen.mainScreen().bounds.size.width
let screenHeight = UIScreen.mainScreen().bounds.size.height
var aspectRatio: CGFloat = 1.0
var viewFinderHeight: CGFloat = 0.0
var viewFinderWidth: CGFloat = 0.0
var viewFinderMarginLeft: CGFloat = 0.0
var viewFinderMarginTop: CGFloat = 0.0
Now calculate the size of the preview layer:
func setSession(session: AVCaptureSession) -> Void {
    if screenWidth > screenHeight {
        aspectRatio = screenHeight / screenWidth * aspectRatio
        viewFinderWidth = self.bounds.width
        viewFinderHeight = self.bounds.height * aspectRatio
        viewFinderMarginTop *= aspectRatio
    } else {
        aspectRatio = screenWidth / screenHeight
        viewFinderWidth = self.bounds.width * aspectRatio
        viewFinderHeight = self.bounds.height
        viewFinderMarginLeft *= aspectRatio
    }
    (self.layer as AVCaptureVideoPreviewLayer).session = session
Set the layer's videoGravity to AVLayerVideoGravityResizeAspectFill so that the layer stretches to fill your custom view:
    (self.layer as AVCaptureVideoPreviewLayer).videoGravity = AVLayerVideoGravityResizeAspectFill
Finally, set the frame of your preview layer to the values calculated above, with any offset that you like:
    (self.layer as AVCaptureVideoPreviewLayer).frame = CGRectMake(viewFinderMarginLeft, viewFinderMarginTop, viewFinderWidth, viewFinderHeight)
}
This may take some tweaking since I haven't tested it live, but you should be able to create a more flexible viewfinder area delimited by the bounds of your AVPreviewView.
What you're seeing isn't so much the layer being positioned as it is the aspect ratio of the view/layer not matching the aspect ratio of the camera, so the videoGravity property aspect-fills it (which always implies centering).
When you create the layer, size it so that the aspect ratio is correct, then position it at will. Or, in this case, resize the view to the correct aspect ratio; then the view can be positioned at will.
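For example, a minimal sketch of that idea (my illustration; previewView and the 4:3 sensor aspect ratio are assumptions):

// Give the preview view the camera's aspect ratio up front; with the aspect
// already correct, videoGravity has nothing to re-center, and the view can
// then be positioned anywhere you like.
let cameraAspect: CGFloat = 4.0 / 3.0 // typical back-camera sensor aspect
let width = view.bounds.width
previewView.frame = CGRect(x: 0, y: 0, width: width, height: width * cameraAspect)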
I had a similar problem; I fixed it by doing this:
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
previewLayer.anchorPoint = videoLayer.bounds.origin
previewLayer.frame = CGRect(x: videoLayer.bounds.origin.x, y: videoLayer.bounds.origin.y, width: videoLayer.frame.size.width, height: videoLayer.frame.size.height)
videoLayer.layer.addSublayer(previewLayer)
captureSession.startRunning()
Hope this can help :)
I ran into this problem, and the code provided did not fix the situation even though it was working code. I have been building my app using Interface Builder.
In the Attributes Inspector, under Extend Edges, there are settings that allow the view to be extended under top bars and under bottom bars. My settings had these checked, which was throwing off the calculations for the positioning of the view.
[Screenshot: the Extend Edges settings in the Attributes Inspector]