Erasing pixels from an image and then reverting them back - Swift

I'm creating an eraser app where I have a picture and the user can erase the background with the following func:
import UIKit
import AVFoundation

func eraseImage(image: UIImage, line: [CGPoint], brushSize: CGFloat) -> UIImage? {
    guard let firstPoint = line.first else { return nil }
    UIGraphicsBeginImageContextWithOptions(proxy.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    let context = UIGraphicsGetCurrentContext()
    // Fit the image inside the available space while keeping its aspect ratio.
    let rect = AVMakeRect(aspectRatio: image.size,
                          insideRect: CGRect(x: 0, y: 0, width: proxy.size.width, height: proxy.size.height))
    //lassoImageView.image?.draw(in: calculateRectOfImageInImageView(imageView: lassoImageView))
    image.draw(in: rect, blendMode: .normal, alpha: 1)
    context?.move(to: CGPoint(x: firstPoint.x - 50, y: firstPoint.y - 50))
    for pointIndex in 1..<line.count {
        context?.addLine(to: CGPoint(x: line[pointIndex].x - 50, y: line[pointIndex].y - 50))
    }
    // The .clear blend mode punches transparent pixels out of what was just drawn.
    context?.setBlendMode(.clear)
    context?.setLineCap(.round)
    context?.setLineWidth(brushSize)
    context?.setShadow(offset: CGSize(width: 0, height: 0), blur: 8)
    context?.strokePath()
    return UIGraphicsGetImageFromCurrentImageContext()
}
I'm struggling to figure out how I can add a drawing function so the user can correct their eraser mistakes and draw back some parts.
I'm thinking of drawing the original image onto the edited image with a mask built from the path that tracks the CGPoints of the user's touches. Is that possible?

I would suggest setting up your image with a second CALayer installed as a mask. Set the mask layer's contents to an image of its own, and draw with clear/opaque colors into that mask image to hide/expose pixels from the image layer.
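To make that concrete, here is a minimal sketch of the setup (the class and method names are mine, not from the answer): the photo stays untouched in its own layer, and erasing or restoring only repaints the mask image, so nothing is ever lost.

import UIKit

final class EraseRestoreView: UIView {   // hypothetical container view
    private let imageLayer = CALayer()
    private let maskLayer = CALayer()
    private var maskImage: UIImage?      // current mask; fully opaque at start

    func configure(with image: UIImage) {
        imageLayer.contents = image.cgImage
        imageLayer.frame = bounds
        layer.addSublayer(imageLayer)

        // Start with an all-opaque mask so the whole photo is visible.
        maskImage = UIGraphicsImageRenderer(size: bounds.size).image { ctx in
            UIColor.black.setFill()
            ctx.fill(bounds)
        }
        maskLayer.frame = bounds
        maskLayer.contents = maskImage?.cgImage
        imageLayer.mask = maskLayer
    }

    /// Strokes the given points into the mask. `erase == true` paints clear
    /// (hides pixels); `erase == false` paints opaque (restores pixels).
    func stroke(points: [CGPoint], brushSize: CGFloat, erase: Bool) {
        guard let current = maskImage, points.count > 1 else { return }
        maskImage = UIGraphicsImageRenderer(size: bounds.size).image { ctx in
            current.draw(in: bounds)
            let c = ctx.cgContext
            c.setLineCap(.round)
            c.setLineWidth(brushSize)
            c.setBlendMode(erase ? .clear : .normal)
            c.setStrokeColor(UIColor.black.cgColor)
            c.move(to: points[0])
            points.dropFirst().forEach { c.addLine(to: $0) }
            c.strokePath()
        }
        maskLayer.contents = maskImage?.cgImage
    }
}

Because only the mask changes, undoing an erase is just another stroke with erase set to false, and the full-resolution original is always available for export.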

Related

Get preview image from dae file, to display in UIImage

I'm building an augmented reality app and would like to show preview images of the available DAE files.
Is it possible to use Swift to extract image data from a DAE file and load that into a UIImage?
Out of the box there is no function you can call that will pull a screenshot from a .dae file. That's not to say there isn't a way to do it on device.
To understand what you are asking, let's consider the different components.
The .dae file is loaded into an ARSCNView, which renders the 3D content. The ARSCNView is traditionally added as a subview of your view controller. In this case you need the viewer to render the 3D file off-screen, so create an ARSCNView but do not add it as a subview of the VC (I would add it during the initial setup to make sure everything loads, then comment that line out).
Once you have your 3D model loaded into your ARSCNView, use the function below to capture the contents of the view as an image:
let viewSnapshot = takeSnapshotOfView(view: yourARSCNView)
and here is the snapshot function, which also resizes the result:
import UIKit
import CoreGraphics

func takeSnapshotOfView(view: UIView) -> UIImage? {
    UIGraphicsBeginImageContext(view.frame.size)
    view.drawHierarchy(in: CGRect(origin: .zero, size: view.frame.size), afterScreenUpdates: true)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    guard let snapshot = image else { return nil }
    // Resize to a 300pt-wide thumbnail, keeping the aspect ratio.
    let thumbnailHeight = snapshot.size.height * 300 / snapshot.size.width
    return snapshot.resize(CGRect(x: 0, y: 0, width: 300, height: thumbnailHeight))
}
extension UIImage {
    func resize(_ toSize: CGRect) -> UIImage {
        let size = self.size
        let widthRatio = toSize.width / size.width
        let heightRatio = toSize.height / size.height
        var newSize: CGSize
        if widthRatio > heightRatio {
            newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
        } else {
            newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
        }
        let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        self.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
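Putting those pieces together, a rough sketch might look like this (previewImage(forModelNamed:), the view size, and the model name are placeholders, not part of the answer):

import ARKit
import SceneKit

func previewImage(forModelNamed name: String) -> UIImage? {
    // Off-screen viewer; deliberately not added as a subview.
    let scnView = ARSCNView(frame: CGRect(x: 0, y: 0, width: 600, height: 600))
    guard let scene = SCNScene(named: name) else { return nil } // e.g. "ship.dae" in the bundle
    scnView.scene = scene
    // Capture the view's contents with the helper above.
    return takeSnapshotOfView(view: scnView)
}

For what it's worth, SCNView (and therefore ARSCNView) also exposes a snapshot() method that returns a UIImage of the rendered scene directly, which may save you the drawHierarchy step.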

Getting black and white image after applying mask

I am removing the background from an image containing at least one human body. I apply a mask to the UIImage and successfully get a masked image containing only the human body with a transparent background, but when I convert back to a UIImage after applying the mask I get a black and white cropped image.
I am using this snippet to apply the mask and get the result as a UIImage:
func maskImage(image: UIImage, mask: UIImage) -> UIImage {
    let imageReference = image.cgImage
    let maskReference = mask.cgImage
    let imageMask = CGImage(maskWidth: maskReference!.width,
                            height: maskReference!.height,
                            bitsPerComponent: maskReference!.bitsPerComponent,
                            bitsPerPixel: maskReference!.bitsPerPixel,
                            bytesPerRow: maskReference!.bytesPerRow,
                            provider: maskReference!.dataProvider!,
                            decode: nil,
                            shouldInterpolate: true)
    let maskedReference = imageReference!.masking(imageMask!)
    let maskedImage = UIImage(cgImage: maskedReference!)
    return maskedImage
}
but I am getting a black and white image of the human body instead of a coloured one.
After doing much research I finally got the answer below:
UIGraphicsBeginImageContextWithOptions(imgView1.frame.size, false, 0.0)
let context = UIGraphicsGetCurrentContext()
// Core Graphics coordinates are flipped relative to UIKit, so flip before clipping.
context?.translateBy(x: 0.0, y: imgView1.frame.size.height)
context?.scaleBy(x: 1.0, y: -1.0)
let maskImage = maskImg.cgImage
context?.clip(to: imgView1.bounds, mask: maskImage!)
context?.translateBy(x: 0.0, y: imgView1.frame.size.height)
context?.setStrokeColor(UIColor.red.cgColor)
context?.stroke(imgView1.frame, width: 15.0)
context?.scaleBy(x: 1.0, y: -1.0)
imgView1.image?.draw(in: imgView1.bounds)
let image: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
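Wrapped into a reusable helper, the same steps might look like the following sketch (the function and parameter names are mine; it assumes you pass the mask image, the original colour image, and the image view's bounds):

func applyMask(_ mask: UIImage, to image: UIImage, in bounds: CGRect) -> UIImage? {
    guard let maskCG = mask.cgImage else { return nil }
    UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    // Core Graphics coordinates are flipped relative to UIKit, so flip before clipping...
    context.translateBy(x: 0.0, y: bounds.size.height)
    context.scaleBy(x: 1.0, y: -1.0)
    context.clip(to: bounds, mask: maskCG)
    // ...and flip back so UIImage.draw(in:) lands the right way up.
    context.translateBy(x: 0.0, y: bounds.size.height)
    context.scaleBy(x: 1.0, y: -1.0)
    image.draw(in: bounds)
    return UIGraphicsGetImageFromCurrentImageContext()
}

Because the clip is installed before the colour image is drawn, everything outside the mask stays transparent and the pixels inside keep their original colours.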

How to make a UIImage be a blur effect view?

OK, I'm working in Swift here, and there are a lot of answers like "How to use UIVisualEffectView?" that talk about how to apply a UIVisualEffectView OVER an image, so that it blurs it like a background.
My problem is that I need my image, or rather the outline of my image, to BE the blur view - meaning I want a blur UIVisualEffectView in the shape of my image, so the "color" of the image itself is the blur. An example mockup (pretend that is a blur):
I know you can trace a UIImage into a custom color like this:
func overlayImage(color: UIColor, img: UIImage) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(img.size, false, UIScreen.main.scale)
    let context = UIGraphicsGetCurrentContext()
    color.setFill()
    context!.translateBy(x: 0, y: img.size.height)
    context!.scaleBy(x: 1.0, y: -1.0)
    context!.setBlendMode(CGBlendMode.colorBurn)
    let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
    context!.draw(img.cgImage!, in: rect)
    context!.setBlendMode(CGBlendMode.sourceIn)
    context!.addRect(rect)
    context!.drawPath(using: CGPathDrawingMode.fill)
    let coloredImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return coloredImage!
}
But I can't get my UIImageView to "mask" the blur view and achieve the effect. Right now with this attempt:
var img = UIImageView(image: UIImage(named: "dudeIco"))
img.frame = CGRect(x: 0, y: 0, width: self.bounds.width * 0.7, height: self.bounds.width * 0.7)
img.center = CGPoint(x: self.bounds.width / 2, y: self.bounds.height / 2)
self.addSubview(img)
let blur = UIVisualEffectView(effect: UIBlurEffect(style: UIBlurEffectStyle.light))
blur.frame = img.bounds
blur.isUserInteractionEnabled = false
img.insertSubview(blur, at: 0)
I just get a blurred square. I need the shape of the image. How can I do this? Is this impossible?
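This isn't from the thread, but as a hedged sketch, one way to get the blur into the image's shape is to hand an image view to the blur view's mask property (iOS 10+), so the blur only renders where the image has opaque pixels ("dudeIco" and the container are taken from the snippet above):

let img = UIImage(named: "dudeIco")
let blur = UIVisualEffectView(effect: UIBlurEffect(style: .light))
blur.frame = CGRect(x: 0, y: 0, width: self.bounds.width * 0.7, height: self.bounds.width * 0.7)
blur.center = CGPoint(x: self.bounds.width / 2, y: self.bounds.height / 2)
self.addSubview(blur)

// The mask view's alpha channel decides where the blur is visible,
// so the blur only shows through the opaque parts of the image.
let shapeMask = UIImageView(image: img)
shapeMask.frame = blur.bounds
shapeMask.contentMode = .scaleAspectFit
blur.mask = shapeMask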

How to change a specific color in an image to a different color

I have a UIView whose layer contents is an image:
let image = UIImage(named: "myImage")
layer.contents = image?.cgImage
This image has a few colors.
Is there a way to change a specific color to any other color of my choice?
I found answers for changing all of the colors in the image, but not a specific one.
You can't change a specific color in a PNG with a transparent background directly, but I found a solution.
extension UIImage {
    func maskWithColor(color: UIColor) -> UIImage? {
        let maskingColors: [CGFloat] = [100, 255, 100, 255, 100, 255] // We should replace white color.
        let maskImage = cgImage!
        let bounds = CGRect(x: 0, y: 0, width: size.width * 3, height: size.height * 3) // * 3, for best resolution.
        let sz = CGSize(width: size.width * 3, height: size.height * 3) // Size.
        var returnImage: UIImage? // Image to return.

        /* First we remove the transparent background, because
           maskingColorComponents doesn't work with transparent images. */
        UIGraphicsBeginImageContextWithOptions(sz, true, 0.0)
        let context = UIGraphicsGetCurrentContext()!
        context.saveGState()
        context.scaleBy(x: 1.0, y: -1.0) // iOS flips images upside down; this fixes it...
        context.translateBy(x: 0, y: -sz.height) // ...together with this :)
        context.draw(maskImage, in: bounds)
        context.restoreGState()
        let noAlphaImage = UIGraphicsGetImageFromCurrentImageContext() // New image, without transparent elements.
        UIGraphicsEndImageContext()

        let noAlphaCGRef = noAlphaImage?.cgImage // Get the CGImage.
        if let imgRefCopy = noAlphaCGRef?.copy(maskingColorComponents: maskingColors) { // Magic.
            UIGraphicsBeginImageContextWithOptions(sz, false, 0.0)
            let context = UIGraphicsGetCurrentContext()!
            context.scaleBy(x: 1.0, y: -1.0)
            context.translateBy(x: 0, y: -sz.height)
            context.clip(to: bounds, mask: maskImage) // Remove background from image with the mask.
            context.setFillColor(color.cgColor) // Set the new color: we remove white and fill with red.
            context.fill(bounds)
            context.draw(imgRefCopy, in: bounds) // Draw the new image.
            let finalImage = UIGraphicsGetImageFromCurrentImageContext()
            returnImage = finalImage! // YEAH!
            UIGraphicsEndImageContext()
        }
        return returnImage
    }
}
To call this function, use code like this:
let image = UIImage(named: "Brush")?.maskWithColor(color: UIColor.red)
You can not change the image's color directly... The only way is to swap the image for a different one on whatever event you need.
Another variant is to create an image where that color is transparent, and set the background color of the view (or whatever you put the image on) to the color you want.
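As a rough illustration of that last variant (the asset name is a placeholder, and it assumes the area you want to recolour is transparent in the PNG):

let holder = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
holder.backgroundColor = .red // the "specific colour" simply shows through the transparent region
let overlay = UIImageView(image: UIImage(named: "BrushCutOut")) // placeholder asset
overlay.frame = holder.bounds
holder.addSubview(overlay)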

How to procedurally draw rectangle / lines in swift using CGContext

I've been trawling the internet for days trying to find the simplest code examples of how to draw a rectangle or lines procedurally in Swift. I have seen how to do it by overriding drawRect. I believe you can create a CGContext and then draw into an image, but I'd love to see some simple code examples. Or is this a terrible approach? Thanks.
class MenuController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        self.view.backgroundColor = UIColor.blackColor()
        var logoFrame = CGRectMake(0, 0, 118, 40)
        var imageView = UIImageView(frame: logoFrame)
        imageView.image = UIImage(named: "Logo")
        self.view.addSubview(imageView)
        // need to draw a rectangle here
    }
}
Here's an example that creates a custom UIImage containing a transparent background and a red rectangle with lines crossing diagonally through it.
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let imageSize = CGSize(width: 200, height: 200)
        let imageView = UIImageView(frame: CGRect(origin: CGPoint(x: 100, y: 100), size: imageSize))
        self.view.addSubview(imageView)
        let image = drawCustomImage(size: imageSize)
        imageView.image = image
    }
}
func drawCustomImage(size: CGSize) -> UIImage {
    // Setup our context
    let bounds = CGRect(origin: .zero, size: size)
    let opaque = false
    let scale: CGFloat = 0
    UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
    let context = UIGraphicsGetCurrentContext()!
    // Setup complete, do drawing here
    context.setStrokeColor(UIColor.red.cgColor)
    context.setLineWidth(2)
    context.stroke(bounds)
    context.beginPath()
    context.move(to: CGPoint(x: bounds.minX, y: bounds.minY))
    context.addLine(to: CGPoint(x: bounds.maxX, y: bounds.maxY))
    context.move(to: CGPoint(x: bounds.maxX, y: bounds.minY))
    context.addLine(to: CGPoint(x: bounds.minX, y: bounds.maxY))
    context.strokePath()
    // Drawing complete, retrieve the finished image and cleanup
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image!
}
An updated answer using Swift 3.0
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let imageSize = CGSize(width: 200, height: 200)
        let imageView = UIImageView(frame: CGRect(origin: CGPoint(x: 100, y: 100), size: imageSize))
        self.view.addSubview(imageView)
        let image = drawCustomImage(size: imageSize)
        imageView.image = image
    }
}
func drawCustomImage(size: CGSize) -> UIImage? {
    // Setup our context
    let bounds = CGRect(origin: CGPoint.zero, size: size)
    let opaque = false
    let scale: CGFloat = 0
    UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    // Setup complete, do drawing here
    context.setStrokeColor(UIColor.red.cgColor)
    context.setLineWidth(5.0)
    // Would draw a border around the rectangle
    // context.stroke(bounds)
    context.beginPath()
    context.move(to: CGPoint(x: bounds.maxX, y: bounds.minY))
    context.addLine(to: CGPoint(x: bounds.minX, y: bounds.maxY))
    context.strokePath()
    // Drawing complete, retrieve the finished image and cleanup
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
let imageSize = CGSize(width: 200, height: 200)
let imageView = UIImageView(frame: CGRect(origin: CGPoint(x: 100, y: 100), size: imageSize))
let image = drawCustomImage(size: imageSize)
imageView.image = image
I used the accepted answer to draw lines in a Tic Tac Toe game when one of the players won. Thanks, good to know that it worked. Unfortunately, I ran into some problems getting it to work on different sizes of iPhones and iPads simultaneously. That's probably something that should have been addressed. Basically what I'm saying is that it might not actually be worth the trouble of all that code, depending on your case.
My alternate solution is to simply make customized, better looking line in Photoshop and then load it with UIImageView. For me this was MUCH simpler, runs better, and looks better. Obviously it really depends on what you need it for.
Steps:
1: Download or create an image (preferably saved as .PNG)
2: Drag it into your project
3: Drag a UIImage View into your storyboard
4: Click on the Image View and select the image in the attributes inspector
5: Ctrl click and drag the Image View to your .swift file to declare an Outlet
6: Set the autolayout constraints so it works on ALL devices EASILY
Animating, rotating, and transforming image views on and off the screen is also arguably easier
To change the image:
yourImageViewOutletName.image = UIImage(named: "ImageNameHere")