Crop a quadrilateral from an image in SwiftUI [duplicate] - swift

I would like to clip a Bézier path from an image. For some reason, the image remains unclipped. And how do I position the path so the image is cut properly?
extension UIImage {
    func imageByApplyingMaskingBezierPath(_ path: UIBezierPath, _ pathFrame: CGRect) -> UIImage {
        UIGraphicsBeginImageContext(self.size)
        let context = UIGraphicsGetCurrentContext()!
        context.saveGState()
        path.addClip()
        draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        let maskedImage = UIGraphicsGetImageFromCurrentImageContext()!
        context.restoreGState()
        UIGraphicsEndImageContext()
        return maskedImage
    }
}

You need to add your path.cgPath to the current context; you also need to remove the context.saveGState() and context.restoreGState() calls.
Use this code
func imageByApplyingMaskingBezierPath(_ path: UIBezierPath, _ pathFrame: CGRect) -> UIImage {
    UIGraphicsBeginImageContext(self.size)
    let context = UIGraphicsGetCurrentContext()!
    context.addPath(path.cgPath)
    context.clip()
    draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
    let maskedImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return maskedImage
}
Using it
let testPath = UIBezierPath()
testPath.move(to: CGPoint(x: self.imageView.frame.width / 2, y: self.imageView.frame.height))
testPath.addLine(to: CGPoint(x: 0, y: 0))
testPath.addLine(to: CGPoint(x: self.imageView.frame.width, y: 0))
testPath.close()
self.imageView.image = UIImage(named: "Image")?.imageByApplyingMaskingBezierPath(testPath, self.imageView.frame)
Result

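As an aside (not from the original answer): on iOS 10 and later, the same clipping can be written with UIGraphicsImageRenderer, which manages the context for you and avoids the force-unwraps. A minimal sketch:
extension UIImage {
    // Clip the image to the given path; the area outside the clip
    // is left transparent by default.
    func maskedImage(applying path: UIBezierPath) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { rendererContext in
            rendererContext.cgContext.addPath(path.cgPath)
            rendererContext.cgContext.clip()
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}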
You can try it like this.
var path = UIBezierPath()
var shapeLayer = CAShapeLayer()
var croppedImage = UIImage() // stores the result (named so it doesn't clash with the cropImage() method below)
// Assumed drawing settings (not defined in the original answer):
var strokeColor = UIColor.red
var lineWidth: CGFloat = 2.0

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let touchPoint = touch.location(in: self.YourimageView)
        print("touch begin to : \(touchPoint)")
        path.move(to: touchPoint)
    }
}
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let touchPoint = touch.location(in: self.YourimageView)
        print("touch moved to : \(touchPoint)")
        path.addLine(to: touchPoint)
        addNewPathToImage()
    }
}
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let touchPoint = touch.location(in: self.YourimageView)
        print("touch ended at : \(touchPoint)")
        path.addLine(to: touchPoint)
        addNewPathToImage()
        path.close()
    }
}
func addNewPathToImage() {
    shapeLayer.path = path.cgPath
    shapeLayer.strokeColor = strokeColor.cgColor
    shapeLayer.fillColor = UIColor.clear.cgColor
    shapeLayer.lineWidth = lineWidth
    YourimageView.layer.addSublayer(shapeLayer)
}
func cropImage() {
    UIGraphicsBeginImageContextWithOptions(YourimageView.bounds.size, false, 1)
    // Render the image view, including the drawn path layer, into the context.
    YourimageView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    self.croppedImage = newImage!
}
@IBAction func btnCropImage(_ sender: Any) {
    cropImage()
}
Once you have drawn a path, just call your imageByApplyingMaskingBezierPath from a particular button action.
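For example, a hypothetical handler combining the pieces (YourimageView and path come from the code above):
@IBAction func btnApplyMask(_ sender: Any) {
    // Assumes `path` has already been drawn (and closed) by the touch handlers above.
    if let image = YourimageView.image {
        YourimageView.image = image.imageByApplyingMaskingBezierPath(path, YourimageView.frame)
    }
}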

Here is Swift code to get clips from an image based on a UIBezierPath, and it is very quick to implement. This method works if the image is already being shown on the screen, which will most often be the case. The resulting image will have a transparent background, which is what most people want when they clip part of a photo. You can use the clipped image locally, because you will have it in a UIImage object that I called imageWithTransparentBackground. This very simple code also shows how to save the image to the camera roll, and how to put it straight onto the pasteboard so a user can paste it directly into a text message, into Notes, an email, etc. Note that in order to write the image to the camera roll, you need to edit Info.plist and provide a reason for "Privacy - Photo Library Usage Description" (raw key NSPhotoLibraryUsageDescription).
import Photos // Needed if you save to the camera roll
Provide a UIBezierPath for clipping. Here is my declaration for one.
let clipPath = UIBezierPath()
Populate the clipPath with some logic of your own, using some combination of commands; below are a few I used in my drawing logic. Provide CGPoint equivalents for aPointOnScreen, etc. Build your path relative to the main screen, since self.view is this app's view controller's view (for this code), and self.view.layer is what gets rendered through the clipPath.
clipPath.move(to: aPointOnScreen)
clipPath.addLine(to: otherPointOnScreen)
clipPath.addLine(to: someOtherPointOnScreen)
clipPath.close()
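For illustration only, the points might be declared like this (the coordinates are made up):
let aPointOnScreen = CGPoint(x: 160, y: 120)
let otherPointOnScreen = CGPoint(x: 40, y: 360)
let someOtherPointOnScreen = CGPoint(x: 280, y: 360)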
This logic uses the whole of the device's screen as the context size, and a CGSize is declared for that. fullScreenX and fullScreenY are my variables, in which I have already captured the device's width and height. It helps if the photo you are clipping from is already zoomed in to an adequate size on the screen: what you see is what you get.
let mainScreenSize = CGSize(width: fullScreenX, height: fullScreenY)
// Get an empty context
UIGraphicsBeginImageContext(mainScreenSize)
// Specify the clip path
clipPath.addClip()
// Render through the clip path from the whole of the screen.
self.view.layer.render(in: UIGraphicsGetCurrentContext()!)
// Get the clipped image from the context
let image : UIImage = UIGraphicsGetImageFromCurrentImageContext()!
// Done with the context, so end it.
UIGraphicsEndImageContext()
// The PNG data has the alpha channel for the transparent background
let imageData = image.pngData()
// Below is the local UIImage to use within your code
let imageWithTransparentBackground = UIImage(data: imageData!)
// Make the image available to the pasteboard.
UIPasteboard.general.image = imageWithTransparentBackground
// Save the image to the camera roll.
PHPhotoLibrary.shared().performChanges({
    PHAssetChangeRequest.creationRequestForAsset(from: imageWithTransparentBackground!)
}, completionHandler: { success, error in
    if success {
        //
    }
    else if let error = error {
        //
    }
    else {
        //
    }
})
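Depending on your app's flow, you may also need to request Photos authorization before calling performChanges, or the save will fail. A minimal sketch (not part of the original answer):
PHPhotoLibrary.requestAuthorization { status in
    if status == .authorized {
        // Safe to perform the change request here.
    }
}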

Related

Draw graphics and export with pixel precision with CoreGraphics

I saw a few questions here on Stack Overflow, but none of them solves my problem. What I want to do is subclass NSView and draw some shapes on it, then export/save the created graphics to a PNG file. While the drawing is quite simple, I want to be able to store the image with pixel precision - I know that drawing is done in points instead of pixels. So I override the draw() method to draw a graphic like so:
override func draw(_ dirtyRect: NSRect) {
    super.draw(dirtyRect)
    NSColor.white.setFill()
    dirtyRect.fill()
    NSColor.green.setFill()
    NSColor.green.setStroke()
    // currentContext is assumed to be NSGraphicsContext.current?.cgContext
    currentContext?.beginPath()
    currentContext?.setLineWidth(1.0)
    currentContext?.setStrokeColor(NSColor.green.cgColor)
    currentContext?.move(to: CGPoint(x: 0, y: 0))
    currentContext?.addLine(to: CGPoint(x: self.frame.width, y: self.frame.height))
    currentContext?.closePath()
    currentContext?.strokePath() // implied: the line does appear on screen
}
On screen it looks OK, but after saving to a file the result is not what I expected: I set the line width to 1, but in the exported file it is 2 pixels wide. To save the image, I create an NSImage from the current view:
func getImage() -> NSImage? {
    let size = self.bounds.size
    let imageSize = NSMakeSize(size.width, size.height)
    guard let imageRepresentation = self.bitmapImageRepForCachingDisplay(in: self.bounds) else {
        return nil
    }
    imageRepresentation.size = imageSize
    self.cacheDisplay(in: self.bounds, to: imageRepresentation)
    let image = NSImage(size: imageSize)
    image.addRepresentation(imageRepresentation)
    return image
}
and this image is then saved to a file:
do {
    guard let image = self.canvasView?.getImage() else {
        return
    }
    let imageRep = image.representations.first as? NSBitmapImageRep
    let data = imageRep?.representation(using: .png, properties: [:])
    try data?.write(to: url, options: .atomic)
} catch {
    print(error.localizedDescription)
}
Do you have any tips on what I am doing wrong?

Undoing a path drawn on UIImageView in Swift

I made an erase effect on a UIImageView by drawing a path with the blend mode set to clear, using the code below. How can I undo a given path, meaning restore the original image under that path?
func erase(from fromPoint: CGPoint, to toPoint: CGPoint) {
    UIGraphicsBeginImageContext(self.frame.size)
    image?.draw(in: self.bounds)
    defer {
        self.image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    let path = CGMutablePath()
    path.move(to: fromPoint)
    path.addLine(to: toPoint)
    let context = UIGraphicsGetCurrentContext()!
    context.setShouldAntialias(true)
    context.setLineCap(.round)
    context.setLineWidth(40)
    context.setBlendMode(.clear)
    context.addPath(path)
    context.strokePath()
}
The issue is that you are creating a new UIImage, discarding what was there.
Two options:
Save a copy of your old image and restore it when you want to revert.
Alternatively, don’t alter the image at all, and instead just apply a mask to the UIImageView:
extension UIImageView {
    func erase(from fromPoint: CGPoint, to toPoint: CGPoint) {
        let maskImage = UIGraphicsImageRenderer(bounds: bounds).image { rendererContext in
            let context = rendererContext.cgContext
            context.fill(bounds)
            let path = CGMutablePath()
            path.move(to: fromPoint)
            path.addLine(to: toPoint)
            context.setShouldAntialias(true)
            context.setLineCap(.round)
            context.setLineWidth(40)
            context.setBlendMode(.clear)
            context.addPath(path)
            context.strokePath()
        }
        mask = UIImageView(image: maskImage)
    }
}
Then, when you want to reverse it, just remove the mask.
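Removing the mask is a one-liner; assuming imageView is the image view in question:
imageView.mask = nil // the original image is fully visible again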

Merge an image and text creating another image in swift 4

I am trying to merge a UIImage and the text of a UITextField to create another image, but I haven't had any success.
What does my app do, or what should it do?
Basically, it takes the image created by the snapshot method and merges that image with the text from a UITextField, creating another image that will be saved and shown in a tableView.
But I'm having a lot of trouble making it work.
When I take only the image, everything works well. Here is my code.
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first, let startPoint = startPoint else { return }
    let currentPoint = touch.location(in: imageView)
    let frame = rect(from: startPoint, to: currentPoint)
    rectShapeLayer.removeFromSuperlayer()
    if frame.size.width < 1 {
        tfText.resignFirstResponder()
    } else {
        let memedImage = getImage(frame: frame, imageView: self.imageView)
        save(imageView: imageView, image: memedImage)
    }
}

func getImage(frame: CGRect, imageView: UIImageView) -> UIImage {
    // snapshot(rect:afterScreenUpdates:) is a custom extension (not shown here)
    let cropImage = imageView.snapshot(rect: frame, afterScreenUpdates: true)
    return cropImage
}
But when I try to create an image using UIGraphicsBeginImageContextWithOptions to merge it with the text field, it fails. Here is that code.
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first, let startPoint = startPoint else { return }
    let currentPoint = touch.location(in: imageView)
    let frame = rect(from: startPoint, to: currentPoint)
    rectShapeLayer.removeFromSuperlayer()
    if frame.size.width < 1 {
        tfText.resignFirstResponder()
    } else {
        let memedImage = getImage(frame: frame, imageView: self.imageView)
        save(imageView: imageView, image: memedImage)
    }
}

func getImage(frame: CGRect, imageView: UIImageView) -> UIImage {
    let cropImage = imageView.snapshot(rect: frame, afterScreenUpdates: true)
    UIGraphicsBeginImageContextWithOptions(cropImage.size, false, 0.0)
    cropImage.draw(in: frame)
    tfText.drawText(in: frame)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
Let me show you some screenshots of my app.
First, creating only the image.
Now, when I try to merge the text and image.
Please look at the debug area.
The images are created, but they don’t show up on the tableView.
What am I doing wrong?
UPDATE THE QUESTION
With the code above, my memedImage was empty. (Thanks, Rob.)
So, I changed my previous getImage(_:) to:
func getANewImage(frame: CGRect, imageView: UIImageView, textField: UITextField) -> UIImage {
    let cropImage = imageView.snapshot(rect: frame, afterScreenUpdates: true)
    let newImageView = UIImageView(image: cropImage)
    UIGraphicsBeginImageContextWithOptions(newImageView.frame.size, false, 0.0)
    let context = UIGraphicsGetCurrentContext()!
    context.translateBy(x: newImageView.frame.origin.x, y: newImageView.frame.origin.y)
    newImageView.layer.render(in: context)
    textField.layer.render(in: context)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
That way I almost got it... I created a new image with the text field, but the text field changed its position; it should be in the center.
With .draw it doesn't work, but with layer.render it works almost well.
I had almost no help with this question, only a little hint from my friend Rob. Thank you again, Rob.
Luckily, I found out how to fix the problem with the text field's position, and I'd like to share the solution.
func getImage(frame: CGRect, imageView: UIImageView, textField: UITextField) -> UIImage {
    // Get the new image from the snapshot method
    let cropImage = imageView.snapshot(rect: frame, afterScreenUpdates: true)
    // Create a new imageView with the cropImage
    let newImageView = UIImageView(image: cropImage)
    // Origin point of the snapshot frame
    let frameOriginX = frame.origin.x
    let frameOriginY = frame.origin.y
    UIGraphicsBeginImageContextWithOptions(newImageView.frame.size, false, cropImage.scale)
    let context = UIGraphicsGetCurrentContext()!
    // Render the cropImage into the CGContext
    newImageView.layer.render(in: context)
    // Position of the text field relative to the snapshot frame
    let tf_X = textField.frame.origin.x - frameOriginX
    let tf_Y = textField.frame.origin.y - frameOriginY
    // Translate the context to the text field's position
    context.translateBy(x: tf_X, y: tf_Y)
    // Render the text field into the CGContext
    textField.layer.render(in: context)
    // Create the new image
    let newImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
Of course this code can be optimized, but it worked very well for me.
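Calling it from touchesEnded would then look something like this (a sketch reusing names from the earlier code; tfText is the text field):
let memedImage = getImage(frame: frame, imageView: self.imageView, textField: tfText)
save(imageView: imageView, image: memedImage)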

merging imageviews in drawing app swift

I am working on a drawing app. I have three image views:
imageView - contains the base image
tempImageView - for drawing annotations; the drawLineFrom function takes a point and then draws lines on tempImageView
func drawLineFrom(fromPoint: CGPoint, toPoint: CGPoint)
{
    //print("drawLineFrom")
    let mid1 = CGPoint(x: (prevPoint1.x + prevPoint2.x) * 0.5, y: (prevPoint1.y + prevPoint2.y) * 0.5)
    let mid2 = CGPoint(x: (toPoint.x + prevPoint1.x) * 0.5, y: (toPoint.y + prevPoint1.y) * 0.5)
    UIGraphicsBeginImageContextWithOptions(self.tempImageView.bounds.size, false, 0.0)
    if let context = UIGraphicsGetCurrentContext()
    {
        tempImageView.image?.draw(in: CGRect(x: 0, y: 0, width: self.tempImageView.frame.size.width, height: self.tempImageView.frame.size.height))
        let annotaionPath = UIBezierPath()
        annotaionPath.move(to: CGPoint(x: mid1.x, y: mid1.y))
        annotaionPath.addQuadCurve(to: CGPoint(x: mid2.x, y: mid2.y), controlPoint: CGPoint(x: prevPoint1.x, y: prevPoint1.y))
        annotaionPath.lineCapStyle = CGLineCap.round
        annotaionPath.lineJoinStyle = CGLineJoin.round
        annotaionPath.lineWidth = editorPanelView.brushWidth
        context.setStrokeColor(editorPanelView.drawingColor.cgColor)
        annotaionPath.stroke()
        tempImageView.image = UIGraphicsGetImageFromCurrentImageContext()
        tempImageView.alpha = editorPanelView.opacity
        UIGraphicsEndImageContext()
    }
}
drawingImageView - after each touchesEnded I merge tempImageView into drawingImageView and set tempImageView.image = nil.
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?)
{
    isDrawing = false
    if !swiped
    {
        drawLineFrom(fromPoint: lastPoint, toPoint: lastPoint)
    }
    annotationArray.append(annotationsPoints)
    annotationsPoints.removeAll()
    // Merge tempImageView into drawingImageView
    UIGraphicsBeginImageContext(drawingImageView.frame.size)
    drawingImageView.image?.draw(in: CGRect(x: 0, y: 0, width: drawingImageView.frame.size.width, height: drawingImageView.frame.size.height), blendMode: CGBlendMode.normal, alpha: 1.0)
    tempImageView.image?.draw(in: CGRect(x: 0, y: 0, width: drawingImageView.frame.size.width, height: drawingImageView.frame.size.height), blendMode: CGBlendMode.normal, alpha: editorPanelView.opacity)
    drawingImageView.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    tempImageView.image = nil
}
When the save button is clicked:
let drawingImage = self.drawingImageView.image
let combinedImage = self.imageView.combineWithOverlay(overlayImageView: self.drawingImageView)
and I am saving combinedImage.
The problem is, when I merge tempImageView with drawingImageView, the annotations get blurred. I want to maintain the same clarity. I am not able to find any solution for this. Any help (even if it's just a kick in the right direction) would be appreciated.
I think the issue is with using UIGraphicsBeginImageContext(drawingImageView.frame.size).
The default scale it uses is 1.0, so if you're on a Retina screen, the content will be scaled up 2 or 3 times, causing the blurry appearance.
You should use UIGraphicsBeginImageContextWithOptions, like you have in drawLineFrom, with a scale of 0.0, which defaults to the screen's scale.
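Applied to the merge in touchesEnded, that would look something like this (a sketch of the fix, reusing the names from the question):
// A scale of 0.0 uses the screen's scale, keeping the merged image sharp on Retina displays.
UIGraphicsBeginImageContextWithOptions(drawingImageView.frame.size, false, 0.0)
drawingImageView.image?.draw(in: drawingImageView.bounds, blendMode: .normal, alpha: 1.0)
tempImageView.image?.draw(in: drawingImageView.bounds, blendMode: .normal, alpha: editorPanelView.opacity)
drawingImageView.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()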

Cut a hole on SKNode and update its physicsBody in Swift

I'm creating an app which will contain some holes in an image.
Since the code is very big, I've created this simple example which has the same idea.
This code has an image with its physicsBody already set.
What I would like is, in the touchesBegan function, to draw some transparent circles at the touched location and update the image's physicsBody (making a hole in the image).
I've found several examples in Objective-C using UIImage; can someone help with Swift and SKSpriteNode?
import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        let texture = SKTexture(imageNamed: "Icon.png")
        let node = SKSpriteNode(texture: texture)
        node.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))
        addChild(node)
        node.physicsBody = SKPhysicsBody(rectangleOfSize: texture.size())
        node.physicsBody?.dynamic = false
    }

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    }
}
One option is to apply a mask, render that to an image, and update your SKSpriteNode's texture; then use that texture to determine the physics body. This process, however, will not be very performant.
For example in touchesBegan you can say something like:
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first else { return }
    if let node = self.nodeAtPoint(touch.locationInNode(self.scene!)) as? SKSpriteNode {
        let layer = self.layerFor(touch, node: node) // We'll use a helper function to create the layer
        let image = self.snapShotLayer(layer) // Helper function to snapshot the layer
        let texture = SKTexture(image: image)
        node.texture = texture
        // This maps the physical bounds of the body to the alpha channel of the texture. Performance hit.
        node.physicsBody = SKPhysicsBody(texture: node.texture!, size: node.size)
    }
}
Here we just get the node and look at the first touch (you could generalize to allow multi-touch); we then create a layer with the new image with transparency, create a texture out of it, and update the node's physics body.
For creating the layer you can say something like:
func layerFor(touch: UITouch, node: SKSpriteNode) -> CALayer
{
    let touchDiameter: CGFloat = 20.0
    let layer = CALayer()
    layer.frame = CGRect(origin: CGPointZero, size: node.size)
    layer.contents = node.texture?.CGImage()
    let locationInNode = touch.locationInNode(node)
    // Convert touch to layer coordinate system from node coordinates
    let touchInLayerX = locationInNode.x + node.size.width * 0.5 - touchDiameter * 0.5
    let touchInLayerY = node.size.height - (locationInNode.y + node.size.height * 0.5) - touchDiameter * 0.5
    let circleRect = CGRect(x: touchInLayerX, y: touchInLayerY, width: touchDiameter, height: touchDiameter)
    let circle = UIBezierPath(ovalInRect: circleRect)
    let shapeLayer = CAShapeLayer()
    shapeLayer.frame = CGRect(x: 0.0, y: 0.0, width: node.size.width, height: node.size.height)
    let path = UIBezierPath(rect: shapeLayer.frame)
    path.appendPath(circle)
    shapeLayer.path = path.CGPath
    shapeLayer.fillRule = kCAFillRuleEvenOdd
    layer.mask = shapeLayer
    return layer
}
Here we just set the current texture to the contents of a CALayer then we create a CAShapeLayer for the mask we want to create. We want the mask to be opaque for most of the layer but we want a transparent circle, so we create a path of a rectangle then add a circle to it. We set the fillRule to kCAFillRuleEvenOdd to fill everything but our circle.
Lastly, we render that layer to a UIImage that we can use to update our texture.
func snapShotLayer(layer: CALayer) -> UIImage
{
    UIGraphicsBeginImageContextWithOptions(layer.frame.size, false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    layer.renderInContext(context!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
Maybe someone will provide a more performant way of accomplishing this, but I think this will work for a lot of cases.