Crop image in Swift

I am trying to crop an image in Swift. The idea is that the user captures a photo and is then allowed to set a crop area. I am able to get the image from that crop area, but I want the cropped image to be resized to a particular width and height; that is, if its height or width is smaller than the target, it should be scaled up.
The cropped image should fill the frame at its maximum width and height; currently the remaining area is just filled with transparency.
I have also added my cropping code:
// Build a mask path from the four draggable corner views (tags 101–104)
let tempLayer = CAShapeLayer()
tempLayer.frame = self.view.frame
let path = UIBezierPath()
var endPoint: CGPoint!
for (var i = 0; i < 4; i++) {
    let tag = 101 + i
    let pointView = viewCrop.viewWithTag(tag)
    switch (pointView!.tag) {
    case 101:
        endPoint = CGPointMake(pointView!.center.x - 20, pointView!.center.y - 20)
        path.moveToPoint(endPoint)
    default:
        path.addLineToPoint(CGPointMake(pointView!.center.x - 20, pointView!.center.y - 20))
    }
}
path.addLineToPoint(endPoint)
path.closePath()

// Mask the image view to the path and snapshot it
tempLayer.path = path.CGPath
tempLayer.fillColor = UIColor.whiteColor().CGColor
tempLayer.backgroundColor = UIColor.clearColor().CGColor
imgReceiptView.layer.mask = tempLayer
UIGraphicsBeginImageContextWithOptions(viewCrop.bounds.size, imgReceiptView.opaque, 0.0)
imgReceiptView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
let cropImg = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(cropImg, nil, nil, nil)
imgReceiptView.hidden = true

// Show the cropped result in a fixed 160pt-high preview frame
let tempImageView = UIImageView(frame: CGRectMake(20, self.view.center.y - 80, self.view.frame.width - 40, 160))
tempImageView.backgroundColor = UIColor.grayColor()
tempImageView.image = cropImg
tempImageView.tag = 1001
tempImageView.layer.masksToBounds = true
self.view.addSubview(tempImageView)
Any help will be appreciated.
Thanks in advance.

You can use this library to let the user crop the image to a custom area:
https://github.com/kishikawakatsumi/PEPhotoCropEditor
Hope this helps!
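If you'd rather avoid a library, the transparent border can be removed by redrawing the cropped image so that it fills the target frame. Below is a minimal sketch under stated assumptions: `aspectFillScale` and `resizeToFill` are my own helper names, not part of the original code, and the redraw uses the modern `UIGraphicsImageRenderer` API rather than the Swift 2 API in the question.

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#endif

/// Scale factor that makes `size` completely fill `target`:
/// the larger of the two axis ratios, so no empty bars remain.
func aspectFillScale(of size: CGSize, into target: CGSize) -> CGFloat {
    guard size.width > 0, size.height > 0 else { return 1 }
    return max(target.width / size.width, target.height / size.height)
}

#if canImport(UIKit)
/// Redraw `image` so it fills `target`, letting any overflow be clipped.
func resizeToFill(_ image: UIImage, target: CGSize) -> UIImage {
    let scale = aspectFillScale(of: image.size, into: target)
    let drawSize = CGSize(width: image.size.width * scale,
                          height: image.size.height * scale)
    // Center the scaled image inside the target frame.
    let origin = CGPoint(x: (target.width - drawSize.width) / 2,
                         y: (target.height - drawSize.height) / 2)
    return UIGraphicsImageRenderer(size: target).image { _ in
        image.draw(in: CGRect(origin: origin, size: drawSize))
    }
}
#endif
```

Calling something like resizeToFill(cropImg, target: tempImageView.bounds.size) before assigning the image to the preview view would scale the smaller dimension up so no transparent area remains.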

Related

Screenshot in part of UIView masked to UIBezierPath / CGPath

I'm building a drawing application. I'm drawing using CGMutablePath.
I want the user to be able to select a part of the drawn paths and then move that part, like this:
I thought a possible solution would be to mask a view to the drawn area and then take a screenshot of that view.
Here you can see the drawn area in which I want to take the screenshot:
To take the screenshot, I get the last path drawn, being the area the screenshot is to be taken in:
let shapeLayer = CAShapeLayer()
shapeLayer.path = last.closedPath // returns CGPath.closeSubpath()
shapeLayer.lineWidth = 10
I then create an overlayView that's the view I'm taking the screenshot in.
let overlayView = UIView(frame: view.bounds)
overlayView.backgroundColor = .black
overlayView.alpha = 0.4
view.addSubview(overlayView)
view.bringSubview(toFront: overlayView)
I'm then masking the view to the path:
overlayView.mask(withPath: UIBezierPath(cgPath: last.closedPath!))
The .mask(withPath:) method comes from here:
extension UIView {
    func mask(withPath path: UIBezierPath) {
        let maskLayer = CAShapeLayer()
        maskLayer.path = path.cgPath
        self.layer.mask = maskLayer
    }
}
Then, I take the screenshot in overlayView:
let image: UIImage = {
    UIGraphicsBeginImageContextWithOptions(overlayView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    drawView.drawHierarchy(in: overlayView.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()!
}()
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
What happens is that the overlayView keeps the screen's full size and the screenshot is also drawn at full size.
When debugging the view hierarchy, I can see that the overlayView is still full-size, not masked to the path.
So instead of getting only the outlined part as a screenshot, I get an image of the whole view / screen.
Question
How do I successfully mask the view to the drawn area so I can take a screenshot in that part of the screen only?
I think the overlayView.frame equals self.view.frame, which is why the image is being taken full screen.
Your issue may be solved as follows (although I may have misunderstood):
let shapeLayer = CAShapeLayer()
shapeLayer.path = last.closedPath // returns CGPath.closeSubpath()
shapeLayer.lineWidth = 10
let rect = shapeLayer.path!.boundingBoxOfPath
let overlayView = UIView(frame: rect)
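Building on that, here is a hedged sketch of the whole flow: size the snapshot to the path's bounding box and shift the drawing so the box's origin lands at (0, 0). The `originShift` and `snapshot` helper names are my own, and the path is assumed to be the non-nil closed CGPath from the question.

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#endif

/// Translation that moves `rect`'s origin to (0, 0).
func originShift(of rect: CGRect) -> (dx: CGFloat, dy: CGFloat) {
    return (-rect.origin.x, -rect.origin.y)
}

#if canImport(UIKit)
/// Render only the part of `drawView` that lies inside `path`.
func snapshot(of drawView: UIView, within path: CGPath) -> UIImage {
    let box = path.boundingBoxOfPath
    let shift = originShift(of: box)
    return UIGraphicsImageRenderer(size: box.size).image { ctx in
        let cg = ctx.cgContext
        // Move the bounding box to the context's origin, then clip to the path.
        cg.translateBy(x: shift.dx, y: shift.dy)
        cg.addPath(path)
        cg.clip()
        drawView.layer.render(in: cg)
    }
}
#endif
```

Because the context is only as big as the bounding box, the resulting image contains just the selected region instead of the whole screen.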

How to center a CGImage on a CALayer?

I was trying to set a UIImage's CGImage as a layer's content and then add the layer to a view's layer.
It should be five stars at the center of the yellow view. That's what I want it to be.
But it seems the center of the stars is aligned with the origin of the view.
What should I do to rectify it?
func putOnStars() {
    let rect = UIView(frame: CGRect(x: 0, y: 0, width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height))
    rect.backgroundColor = .yellow
    view.addSubview(rect)
    let baseLayer = CALayer()
    baseLayer.contents = UIImage(named: "stars")?.cgImage
    baseLayer.contentsGravity = kCAGravityCenter
    rect.layer.addSublayer(baseLayer)
}
Here is the stars image for you in case of you want to test.
baseLayer doesn't have a defined frame, so even though baseLayer.contentsGravity = kCAGravityCenter is applied, the layer itself still sits at position (0, 0) with zero size.
There are two possible solutions:
1: Give baseLayer a frame identical to rect's frame:
baseLayer.frame = rect.frame
2: Set the position of baseLayer to the center of rect:
baseLayer.position = rect.center
To place the stars image in the centre of the CALayer, set the frame of the layer, i.e.
let baseLayer = CALayer()
baseLayer.frame = rect.bounds //This Line
baseLayer.contents = UIImage(named: "stars")?.cgImage
baseLayer.contentsGravity = kCAGravityCenter
rect.layer.addSublayer(baseLayer)
For any kind of CALayer, you need to define its frame explicitly.
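Alternatively, if you would rather not rely on contentsGravity at all, you can size the layer to the image and center it arithmetically. A sketch; `centeredRect` and `addCenteredStars` are illustrative names, not from the original answer.

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#endif

/// Frame that centers content of `size` inside `bounds`.
func centeredRect(for size: CGSize, in bounds: CGRect) -> CGRect {
    return CGRect(x: bounds.midX - size.width / 2,
                  y: bounds.midY - size.height / 2,
                  width: size.width,
                  height: size.height)
}

#if canImport(UIKit)
func addCenteredStars(to rect: UIView) {
    guard let stars = UIImage(named: "stars") else { return }
    let baseLayer = CALayer()
    // Size the layer to the image and center it; no contentsGravity needed.
    baseLayer.frame = centeredRect(for: stars.size, in: rect.bounds)
    baseLayer.contents = stars.cgImage
    rect.layer.addSublayer(baseLayer)
}
#endif
```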

Images being flipped when adding to NSAttributedString

I have a strange problem when resizing an image inside an NSAttributedString. The resizing extension works fine, but when the image is added to the NSAttributedString, it gets flipped vertically for some reason.
This is the resizing extension:
extension NSImage {
    func resize(containerWidth: CGFloat) -> NSImage {
        var scale: CGFloat = 1.0
        let currentWidth = self.size.width
        let currentHeight = self.size.height
        if currentWidth > containerWidth {
            scale = (containerWidth * 0.9) / currentWidth
        }
        let newWidth = currentWidth * scale
        let newHeight = currentHeight * scale
        self.size = NSSize(width: newWidth, height: newHeight)
        return self
    }
}
And here is the enumeration over the images in the attributed string:
newAttributedString.enumerateAttribute(NSAttributedStringKey.attachment, in: NSMakeRange(0, newAttributedString.length), options: []) { value, range, stop in
    if let attachement = value as? NSTextAttachment {
        let image = attachement.image(forBounds: attachement.bounds, textContainer: NSTextContainer(), characterIndex: range.location)!
        let newImage = image.resize(containerWidth: markdown.bounds.width)
        let newAttribute = NSTextAttachment()
        newAttribute.image = newImage
        newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttribute, range: range)
    }
}
I've set breakpoints and inspected the images, and they are all in the correct rotation, except when it reaches this line:
newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttribute, range: range)
where the image gets flipped vertically.
I have no clue what could be causing this vertical flip. Is there a way to fix this?
If you look at the developer docs for NSTextAttachment:
https://developer.apple.com/documentation/uikit/nstextattachment
The bounds parameter is defined as follows:
“Defines the layout bounds of the receiver's graphical representation in the text coordinate system.”
I know that when using CoreText to layout text, you need to flip the coordinates, so I should imagine you need to transform your bounds parameter with a vertical reflection too.
Hope that helps.
I figured it out and it was so much simpler than I was making it.
Because the image was in an NSAttributedString being appended to an NSTextView, I didn't need to resize each image in the NSAttributedString; instead I just had to set the attachment scaling inside the NSTextView with
markdown.layoutManager?.defaultAttachmentScaling = NSImageScaling.scaleProportionallyDown
One line is all it took
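For reference, the rule scaleProportionallyDown applies is "shrink to fit, never enlarge". If you ever do need to size attachment bounds yourself, here is a hedged sketch of the same computation; `scaleDownFactor` and `fit(_:toWidth:)` are my own helper names, not AppKit API.

```swift
import Foundation
#if canImport(AppKit)
import AppKit
#endif

/// The "scale proportionally down" rule: shrink to fit the container,
/// preserve aspect ratio, and never scale up.
func scaleDownFactor(for size: CGSize, fitting container: CGSize) -> CGFloat {
    guard size.width > 0, size.height > 0 else { return 1 }
    return min(1, min(container.width / size.width,
                      container.height / size.height))
}

#if canImport(AppKit)
/// Shrink an attachment to a column width by setting its bounds,
/// which avoids mutating the underlying image at all.
func fit(_ attachment: NSTextAttachment, toWidth width: CGFloat) {
    guard let image = attachment.image else { return }
    let s = scaleDownFactor(for: image.size,
                            fitting: CGSize(width: width,
                                            height: .greatestFiniteMagnitude))
    attachment.bounds = CGRect(x: 0, y: 0,
                               width: image.size.width * s,
                               height: image.size.height * s)
}
#endif
```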

Image Cropping grabbing the wrong portion of UIImage during crop

I've been working on making a view controller that will crop an image down to a specific size with some draggable control points and the background image outside of the crop zone dimmed.
For some reason whenever the image is cropped, it is grabbing the wrong reference. I've looked at just about every other post on this to deal with cropping.
Here is my setup for the Storyboard:
I've asked a few other people including a tutor and mentor from a course that I'm taking, but we all seem to be stumped.
I can select a frame by dragging the UL UR DL DR corners around the view controller like this:
But when I press the button and use the crop function I've written, I get something that is not the correct crop based on the framed selection.
I also get this error message during the cropping procedure:
2016-09-07 23:36:38.962 ImageCropView[33133:1056024]
<UIView: 0x7f9cfa42c730; frame = (0 0; 414 736); autoresize = W+H; layer = <CALayer: 0x7f9cfa408400>>'s window
is not equal to <ImageCropView.CroppedImageViewController: 0x7f9cfa43f9b0>'s view's window!
The offending part of the code must be somewhere in one of the functions below.
Here is the cropping function:
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage {
    func rad(deg: CGFloat) -> CGFloat {
        return deg / 180.0 * CGFloat(M_PI)
    }
    // Determine the orientation of the image and apply a transformation
    // to the crop rectangle to shift it to the correct position
    var rectTransform: CGAffineTransform
    switch image.imageOrientation {
    case .Left:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -image.size.height)
    case .Right:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -image.size.width, 0)
    case .Down:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -image.size.width, -image.size.height)
    default:
        rectTransform = CGAffineTransformIdentity
    }
    // Adjust the transformation scale based on the image scale
    rectTransform = CGAffineTransformScale(rectTransform, UIScreen.mainScreen().scale, UIScreen.mainScreen().scale)
    // Apply the transformation to the rect to create a new, shifted rect
    let transformedCropSquare = CGRectApplyAffineTransform(rect, rectTransform)
    // Use the rect to crop the image
    let imageRef = CGImageCreateWithImageInRect(image.CGImage, transformedCropSquare)
    // Create a new UIImage and set the scale and orientation appropriately
    let result = UIImage(CGImage: imageRef!, scale: image.scale, orientation: image.imageOrientation)
    return result
}
Here are the functions to set and translate the mask view
func setTopMask() {
    let path = CGPathCreateWithRect(cropViewMask.frame, nil)
    topMaskLayer.path = path
    topImageView.layer.mask = topMaskLayer
}

func translateMask(sender: UIPanGestureRecognizer) {
    let translation = sender.translationInView(self.view)
    sender.view!.center = CGPointMake(sender.view!.center.x + translation.x, sender.view!.center.y + translation.y)
    sender.setTranslation(CGPointZero, inView: self.view)
    if sender.state == .Ended {
        printFrames()
    }
}

func setCropMaskFrame() {
    let x = ulCorner.center.x
    let y = ulCorner.center.y
    let width = urCorner.center.x - ulCorner.center.x
    let height = blCorner.center.y - ulCorner.center.y
    cropViewMask.frame = CGRectMake(x, y, width, height)
    setTopMask()
}
I know this was a long time ago... Just a thought: I ran into a similar problem, and what I found is that the frames for cropping are most probably correct. The problem lies in the actual size of the picture you're trying to crop. I solved the issue by aligning the size of the view which holds the picture with the actual picture size (in points). Then the cropping area cropped what was selected. This is probably not a complete solution, just sharing my experience; hope it helps to turn on some lightbulbs :)
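To make that concrete: whenever the view and the bitmap differ in size, the selection rect has to be mapped from view points into image coordinates before cropping. A minimal sketch, assuming the view displays the entire image (e.g. scale-to-fill); `imageRect(fromViewRect:viewSize:imageSize:)` is an illustrative helper, not from the original code.

```swift
import Foundation

/// Map a crop rect expressed in view coordinates onto an image whose
/// size (in points) differs from the view's size.
func imageRect(fromViewRect viewRect: CGRect,
               viewSize: CGSize,
               imageSize: CGSize) -> CGRect {
    let sx = imageSize.width / viewSize.width
    let sy = imageSize.height / viewSize.height
    return CGRect(x: viewRect.origin.x * sx,
                  y: viewRect.origin.y * sy,
                  width: viewRect.width * sx,
                  height: viewRect.height * sy)
}
```

For aspect-fit content you would additionally subtract the letterbox insets before scaling; this sketch only covers the case where the whole image fills the view.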

How to scale up a UIImage without smoothing anything?

I want to scale up a UIImage in such a way that the user can see the pixels in the UIImage very sharply. When I put it into a UIImageView and scale the transform matrix up, the UIImage appears antialiased and smoothed.
Is there a way to render in a bigger bitmap context by simply repeating every row and every column to get bigger pixels? How could I do that?
#import <QuartzCore/CALayer.h>
view.layer.magnificationFilter = kCAFilterNearest;
When drawing directly into a bitmap context, we can use:
CGContextSetInterpolationQuality(myBitmapContext, kCGInterpolationNone);
I found this on CGContextDrawImage very slow on iPhone 4
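If you do want to repeat every row and every column yourself, as the question suggests, the core loop is plain array work on a raw pixel buffer. A hedged sketch (`upscalePixels` is my own name, and it treats each UInt32 as one packed RGBA pixel); in practice the magnificationFilter and interpolation-quality answers above are simpler and faster.

```swift
import Foundation

/// Nearest-neighbor upscale of a width*height pixel buffer by an integer
/// factor: every row and every column is simply repeated `factor` times.
func upscalePixels(_ pixels: [UInt32], width: Int, height: Int,
                   factor: Int) -> [UInt32] {
    precondition(pixels.count == width * height && factor >= 1)
    var out = [UInt32]()
    out.reserveCapacity(width * height * factor * factor)
    for y in 0..<height {
        let row = pixels[(y * width)..<((y + 1) * width)]
        // Repeat each column `factor` times...
        var bigRow = [UInt32]()
        bigRow.reserveCapacity(width * factor)
        for p in row {
            bigRow.append(contentsOf: repeatElement(p, count: factor))
        }
        // ...then repeat the whole row `factor` times.
        for _ in 0..<factor { out.append(contentsOf: bigRow) }
    }
    return out
}
```

The enlarged buffer can then be wrapped in a CGImage via CGDataProvider if needed.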
Swift 5
let image = UIImage(named: "Foo")!
let scaledImageSize = image.size.applying(CGAffineTransform(scaleX: 2, y: 2))
UIGraphicsBeginImageContext(scaledImageSize)
let scaledContext = UIGraphicsGetCurrentContext()!
scaledContext.interpolationQuality = .none
image.draw(in: CGRect(origin: .zero, size: scaledImageSize))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
I was also trying this (on a sublayer) and I couldn't get it working; it was still blurry. This is what I had to do:
const CGFloat PIXEL_SCALE = 2;
layer.magnificationFilter = kCAFilterNearest; //Nearest neighbor texture filtering
layer.transform = CATransform3DMakeScale(PIXEL_SCALE, PIXEL_SCALE, 1); //Scale layer up
//Rasterize w/ sufficient resolution to show sharp pixels
layer.shouldRasterize = YES;
layer.rasterizationScale = PIXEL_SCALE;
For UIImage created from CIImage you may use:
imageView.image = UIImage(CIImage: ciImage.imageByApplyingTransform(CGAffineTransformMakeScale(kScale, kScale)))