Image Cropping grabbing the wrong portion of UIImage during crop - Swift

I've been working on making a view controller that will crop an image down to a specific size with some draggable control points and the background image outside of the crop zone dimmed.
For some reason, whenever the image is cropped, it grabs the wrong portion of the image. I've looked at just about every other post on cropping to try to sort this out.
Here is my setup for the Storyboard:
I've asked a few other people including a tutor and mentor from a course that I'm taking, but we all seem to be stumped.
I can select a frame by dragging the UL, UR, DL, and DR corners around the view controller like this:
But when I press the button and use the crop function I've written, I get something that is not the correct crop based on the framed selection.
I also get this error message during the cropping procedure:
2016-09-07 23:36:38.962 ImageCropView[33133:1056024]
<UIView: 0x7f9cfa42c730; frame = (0 0; 414 736); autoresize = W+H; layer = <CALayer: 0x7f9cfa408400>>'s window
is not equal to <ImageCropView.CroppedImageViewController: 0x7f9cfa43f9b0>'s view's window!
The offending part of the code must be somewhere in one of the functions below.
Here is the cropping function:
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage {
    func rad(deg: CGFloat) -> CGFloat {
        return deg / 180.0 * CGFloat(M_PI)
    }
    // determine the orientation of the image and apply a transformation
    // to the crop rectangle to shift it to the correct position
    var rectTransform: CGAffineTransform
    switch image.imageOrientation {
    case .Left:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -image.size.height)
    case .Right:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -image.size.width, 0)
    case .Down:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -image.size.width, -image.size.height)
    default:
        rectTransform = CGAffineTransformIdentity
    }
    // adjust the transformation scale based on the image scale
    rectTransform = CGAffineTransformScale(rectTransform, UIScreen.mainScreen().scale, UIScreen.mainScreen().scale)
    // apply the transformation to the rect to create a new, shifted rect
    let transformedCropSquare = CGRectApplyAffineTransform(rect, rectTransform)
    // use the rect to crop the image
    let imageRef = CGImageCreateWithImageInRect(image.CGImage, transformedCropSquare)
    // create a new UIImage and set the scale and orientation appropriately
    let result = UIImage(CGImage: imageRef!, scale: image.scale, orientation: image.imageOrientation)
    return result
}
Here are the functions that set and translate the mask view:
func setTopMask() {
    let path = CGPathCreateWithRect(cropViewMask.frame, nil)
    topMaskLayer.path = path
    topImageView.layer.mask = topMaskLayer
}

func translateMask(sender: UIPanGestureRecognizer) {
    let translation = sender.translationInView(self.view)
    sender.view!.center = CGPointMake(sender.view!.center.x + translation.x, sender.view!.center.y + translation.y)
    // print(sender.translationInView(self.view))
    sender.setTranslation(CGPointZero, inView: self.view)
    // print("panned mask")
    if sender.state == .Ended {
        printFrames()
    }
}

func setCropMaskFrame() {
    let x = ulCorner.center.x
    let y = ulCorner.center.y
    let width = urCorner.center.x - ulCorner.center.x
    let height = blCorner.center.y - ulCorner.center.y
    cropViewMask.frame = CGRectMake(x, y, width, height)
    setTopMask()
}

I know this was a long time ago... Just a thought: I ran into a similar problem, and what I found is that the frames for cropping are most probably correct. The problem lies in the actual size of the picture you're trying to crop. I solved the issue by aligning the size of the view that holds the picture with the actual picture size (in points). Then the cropping area cropped what was selected. I know this is probably not a full solution, just sharing my experience; hope it helps to turn on some lightbulbs :)
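To illustrate that idea, here is a minimal sketch (mine, not the answerer's code) of mapping a crop rect from the image view's coordinate space into the image's own coordinate space, assuming the view uses .scaleAspectFit:

import UIKit

func imageRect(forViewRect viewRect: CGRect, in imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image else { return nil }
    // scale factor aspect-fit applies, plus the letterbox offsets it introduces
    let scale = min(imageView.bounds.width / image.size.width,
                    imageView.bounds.height / image.size.height)
    let offsetX = (imageView.bounds.width - image.size.width * scale) / 2
    let offsetY = (imageView.bounds.height - image.size.height * scale) / 2
    // remove the letterbox offset, then undo the display scale
    return CGRect(x: (viewRect.origin.x - offsetX) / scale,
                  y: (viewRect.origin.y - offsetY) / scale,
                  width: viewRect.width / scale,
                  height: viewRect.height / scale)
}

A rect produced this way is in the image's point coordinates, so it can be handed to a crop routine like the one above without the offset the asker was seeing.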

Related

How do you move the UIImage inside the UIImageView to the right while retaining the aspect ratio of the original image?

I want to move an image to the right when a user imports it with UIImagePickerController, but when I set contentMode = .right this occurs: the image enlarges for some reason, and it looks like it moves to the left.
Is there any way to keep the aspect ratio of the UIImageView and the aspect ratio of the imported image, while also moving it to the right inside the image view?
This is how I want it to be
Here is one approach: a custom view, using a sublayer with its contents set to the image...
add a CALayer as a sublayer
calculate the aspect-scaled rectangle for the image inside the view's bounds
set the image layer's frame to that scaled rect
then set the layer's origin based on the desired alignment
A simple example:
class AspectAlignImageView: UIView {

    enum AspectAlign {
        case top, left, right, bottom, center
    }

    // this is an array so we can set two options
    // if, for example, we don't know whether the image will be
    // taller or narrower
    // for example:
    //   [.top, .right] will put a
    //     wide image aligned top
    //     narrow image aligned right
    public var alignment: [AspectAlign] = [.center]

    public var image: UIImage?

    private let imgLayer: CALayer = CALayer()

    override func layoutSubviews() {
        super.layoutSubviews()
        // make sure we have an image
        if let img = image {
            // only add the sublayer once
            if imgLayer.superlayer == nil {
                layer.addSublayer(imgLayer)
            }
            imgLayer.contentsGravity = .resize
            imgLayer.contents = img.cgImage
            // calculate the aspect-scaled rect inside our bounds
            var scaledImageRect = CGRect.zero
            let aspectWidth: CGFloat = bounds.width / img.size.width
            let aspectHeight: CGFloat = bounds.height / img.size.height
            let aspectRatio: CGFloat = min(aspectWidth, aspectHeight)
            scaledImageRect.size.width = img.size.width * aspectRatio
            scaledImageRect.size.height = img.size.height * aspectRatio
            // set image layer frame to aspect-scaled rect
            imgLayer.frame = scaledImageRect
            // align as specified
            if alignment.contains(.top) {
                imgLayer.frame.origin.y = 0
            }
            if alignment.contains(.left) {
                imgLayer.frame.origin.x = 0
            }
            if alignment.contains(.bottom) {
                imgLayer.frame.origin.y = bounds.maxY - scaledImageRect.height
            }
            if alignment.contains(.right) {
                imgLayer.frame.origin.x = bounds.maxX - scaledImageRect.width
            }
        }
    }
}
class TestAlignViewController: UIViewController {

    let testView = AspectAlignImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        testView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(testView)
        NSLayoutConstraint.activate([
            // constrain test view to a 240x240 square
            testView.widthAnchor.constraint(equalToConstant: 240.0),
            testView.heightAnchor.constraint(equalTo: testView.widthAnchor),
            // centered in the view
            testView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            testView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
        ])
        if let img = UIImage(named: "bottle") {
            testView.image = img
        }
        testView.alignment = [.right]
        // so we can see the actual view frame
        testView.backgroundColor = .green
    }
}
Using this image:
in a 240x240 view (view background set green so we can see its frame), we get this result:
Set your UIImageView's content mode to aspect fill or aspect fit. Then use Auto Layout.

Rotate UIImageView inside UIScrollView in Swift

I'm working on a basic photo editor that is supposed to zoom, rotate, and flip a photo. I'm using an image view (aspect fill) inside a scroll view, which allows me to zoom easily. But when I try to rotate or flip, the result is not what I would expect. The image view keeps the original frame and seems to rotate the image. The scroll view's zoom scale changes. Any suggestions on how to do this?
It also would be great to have suggestions about setting the image view's anchor point to match the scroll view's anchor point before transforming, because I don't want to display a different portion of the image after transforming, just the same portion of the image, but rotated.
View stack before transform:
View stack after applying rotation:
My code so far:
override func viewDidLoad() {
    super.viewDidLoad()
    scrollView.delegate = self
    setZoomScale()
    scrollView.zoomScale = scrollView.minimumZoomScale
}

@IBAction func rotateAnticlockwise(_ sender: UIButton) {
    rotationAngle -= 0.5
    transformImage()
}

func transformImage() {
    var transform = CGAffineTransform.identity
    transform = transform.rotated(by: .pi * rotationAngle)
    imageView.transform = transform
}

func setZoomScale() {
    let imageSize = imageView.image!.size
    let smallestDimension = min(imageSize.width, imageSize.height)
    scrollView.minimumZoomScale = scrollView.bounds.width / smallestDimension
    scrollView.maximumZoomScale = smallestDimension / scrollView.bounds.width
}
I think you are looking for something like:
imageView.transform = CGAffineTransform(rotationAngle: 0.5)
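If you also need the flip, one approach (a sketch, assuming a hypothetical isFlipped flag tracked alongside rotationAngle) is to build both operations into a single transform:

func transformImage() {
    // rotationAngle is in half-turns, as in the question (0.5 == 90 degrees)
    var transform = CGAffineTransform(rotationAngle: .pi * rotationAngle)
    if isFlipped {
        // horizontal mirror; concatenated after the rotation so the flip
        // stays relative to the rotated image
        transform = transform.scaledBy(x: -1, y: 1)
    }
    imageView.transform = transform
}

Because the transform is applied around the view's center (the default anchor point), the same portion of the image stays centered, which is what the question asks for.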

Images being flipped when adding to NSAttributedString

I have a strange problem when resizing an image that's in an NSAttributedString. The resizing extension works fine, but when the image is added to the NSAttributedString, it gets flipped vertically for some reason.
This is the resizing extension:
extension NSImage {
    func resize(containerWidth: CGFloat) -> NSImage {
        var scale: CGFloat = 1.0
        let currentWidth = self.size.width
        let currentHeight = self.size.height
        if currentWidth > containerWidth {
            scale = (containerWidth * 0.9) / currentWidth
        }
        let newWidth = currentWidth * scale
        let newHeight = currentHeight * scale
        self.size = NSSize(width: newWidth, height: newHeight)
        return self
    }
}
And here is the enumeration over the images in the attributed string:
newAttributedString.enumerateAttribute(NSAttributedStringKey.attachment, in: NSMakeRange(0, newAttributedString.length), options: []) { value, range, stop in
    if let attachement = value as? NSTextAttachment {
        let image = attachement.image(forBounds: attachement.bounds, textContainer: NSTextContainer(), characterIndex: range.location)!
        let newImage = image.resize(containerWidth: markdown.bounds.width)
        let newAttribute = NSTextAttachment()
        newAttribute.image = newImage
        newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttribute, range: range)
    }
}
I've set breakpoints and inspected the images, and they are all in the correct rotation, except when it reaches this line:
newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttribute, range: range)
where the image gets flipped vertically.
I have no clue what could be causing this vertical flip. Is there a way to fix this?
If you look at the developer docs for NSTextAttachment:
https://developer.apple.com/documentation/uikit/nstextattachment
The bounds parameter is defined as follows:
“Defines the layout bounds of the receiver's graphical representation in the text coordinate system.”
I know that when using Core Text to lay out text, you need to flip the coordinates, so I should imagine you need to transform your bounds parameter with a vertical reflection too.
Hope that helps.
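For example, one way to bake in that vertical reflection on the NSImage side (a sketch of the suggestion above, not code from the original answer) is to redraw the image into a flipped context before attaching it:

import AppKit

extension NSImage {
    // returns a copy of the image drawn into a vertically flipped context
    func verticallyFlipped() -> NSImage {
        return NSImage(size: size, flipped: true) { rect in
            self.draw(in: rect)
            return true
        }
    }
}

You would then assign image.verticallyFlipped() to the NSTextAttachment instead of the image itself.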
I figured it out and it was so much simpler than I was making it.
Because the image was in an NSAttributedString being appended into an NSTextView, I didn't need to resize each image in the NSAttributedString; rather, I just had to set the attachment scaling inside the NSTextView with
markdown.layoutManager?.defaultAttachmentScaling = NSImageScaling.scaleProportionallyDown
One line is all it took.

I need help integrating a specific UIImage resizing extension into my current draw CGRect function

I found this extension online. It allows me to have images adhere to aspect fit/fill even when drawn inside dynamically growing/shrinking image views. (Currently, when the image is saved to the camera roll after my draw function, it reverts to "scale fill" regardless of the content mode of the image view. I suspect the reason for this is that I have it drawing the image to the size/bounds of the image view, but since the image view is dynamic, I don't see any way around this without using this extension):
// MARK: - Image Scaling.
extension UIImage {
    /// Scales an image to fit within a bounds with a size governed by the passed size. Also keeps the aspect ratio.
    /// Switch MIN to MAX for aspect fill instead of fit.
    ///
    /// - parameter newSize: the size of the bounds the image must fit within.
    ///
    /// - returns: a new scaled image.
    func scaleImageToSize(newSize: CGSize) -> UIImage {
        var scaledImageRect = CGRect.zero
        let aspectWidth = newSize.width / size.width
        let aspectHeight = newSize.height / size.height
        let aspectRatio = max(aspectWidth, aspectHeight)
        scaledImageRect.size.width = size.width * aspectRatio
        scaledImageRect.size.height = size.height * aspectRatio
        scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0
        scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0
        UIGraphicsBeginImageContext(newSize)
        draw(in: scaledImageRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage!
    }
}
This is the function I'm currently using to draw the image on screen so it can be saved to the camera roll (it combines two images: a frame and an image from the camera roll):
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        // var newSize = currentImage.scaleImageToSize
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
All the tutorials I've found on how to use extensions don't cover how to pass variables in and out the way this one requires. Any insight would be greatly appreciated.
I understand that you don't know how to use the extension, is that correct? Since it just adds a function to every UIImage, you can simply call it on your image like this: currentImage.scaleImageToSize(newSize: someSize) and pass the size you want the image to fit into.
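For instance, here is a sketch of how that call could slot into the draw function above (targetSize is just the image view's size; the names otherwise come from the question's own code):

// inside drawImagesAndText(), instead of drawing currentImage directly:
let targetSize = CGSize(width: imageView.bounds.size.width,
                        height: imageView.bounds.size.height)
// scale to aspect fill (per the extension), then draw the result
let scaledImage = currentImage?.scaleImageToSize(newSize: targetSize)
scaledImage?.draw(in: CGRect(origin: .zero, size: targetSize))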
Dorian Roy was telling me to use that call in place of using just "currentImage", and that's what worked!
(I commented on his initial answer saying I was having issues because I was trying to use the return value from the extension itself in place of "currentImage")

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set things up so the user taps a location in an image view, and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the current image filter in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function, which checks whether the user is touching inside the image view; if not, it sets the filter's center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was outside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using. Ignore the "frame" in there; it's just an image view in front of the first one that lets me save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in its lower left-hand side regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated. Thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think), and very poorly written - that is part of a subclass (it probably should have been an extension) of UIImageView that computes all of this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
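A rough sketch of that conversion (mine, not the answerer's subclass), assuming the image view uses aspect-fit:

import UIKit

func ciPoint(forViewPoint p: CGPoint, in imageView: UIImageView) -> CGPoint? {
    guard let image = imageView.image else { return nil }
    // aspect-fit display scale and the letterbox offsets it introduces
    let scale = min(imageView.bounds.width / image.size.width,
                    imageView.bounds.height / image.size.height)
    let offsetX = (imageView.bounds.width - image.size.width * scale) / 2
    let offsetY = (imageView.bounds.height - image.size.height * scale) / 2
    // back out the offsets and the scale to get image coordinates
    let x = (p.x - offsetX) / scale
    let y = (p.y - offsetY) / scale
    // Core Image's origin is bottom-left, so flip Y against the image height
    return CGPoint(x: x, y: image.size.height - y)
}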
One last note on CIImage - extent. Many times it's a UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled to CGPoint(800,400), would actually be CIVector(800,100): the Y value flips against the image height (500 - 400 = 100).
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last one was due to my stupidity! Worthwhile to note, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually write some blog posts on this, but the real source you want is Simon Gladman (he's moved on; look back to his posts from 2015-16) and his eBook Core Image for Swift (it uses Swift 2, but most of it upgrades automatically to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView and all its subclasses (meaning UIKit) use the CPU. That's okay; using the GPU means rendering through OpenGL ES, specifically using a GLKView. Think of it as the GPU-side equivalent of a UIImageView.
Here's my subclass of it:
open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            // letterbox the image inside the drawable (AspectFit)
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = (0, 0, 0)
            // rgb() is a custom UIColor extension (not shown) returning the color's integer components
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!) / 256.0, Float(rgb.1!) / 256.0, Float(rgb.2!) / 256.0, 0.0)
            glClear(0x00004000)      // GL_COLOR_BUFFER_BIT
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2)         // GL_BLEND
            glBlendFunc(1, 0x0303)   // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the potential to change the "backgroundColor" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for well-performing, near real time use of Core Image on the GPU. One reason my afore-mentioned code for scaling after getting the output of a filter was never updated? It didn't need to be.
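To make that concrete, here's a hedged usage sketch for the GLKViewDFD subclass above; bumpFilter stands in for whatever CIFilter you've configured:

// push each filter result straight to the GPU-backed view,
// instead of converting back to a UIImage on the CPU
let glkView = GLKViewDFD()
glkView.clearColor = .black
bumpFilter.setValue(CIVector(x: x, y: y), forKey: kCIInputCenterKey)
if let output = bumpFilter.outputImage {
    glkView.image = output // the didSet observer calls setNeedsDisplay()
}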
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.